From patchwork Thu Feb 25 20:47:26 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104931
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:26 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-2-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 01/24] KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only
 if PML is enabled
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Check that PML is actually enabled before setting the mask to force a
SPTE to be write-protected.  The bits used for the !AD_ENABLED case are
in the upper half of the SPTE.  With 64-bit paging and EPT, these bits
are ignored, but with 32-bit PAE paging they are reserved.  Setting them
for L2 SPTEs without checking PML breaks NPT on 32-bit KVM.

Fixes: 1f4e5fc83a42 ("KVM: x86: fix nested guest live migration with PML")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu_internal.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 72b0928f2b2d..ec4fc28b325a 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -81,15 +81,15 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * When using the EPT page-modification log, the GPAs in the log
-	 * would come from L2 rather than L1.  Therefore, we need to rely
-	 * on write protection to record dirty pages.  This also bypasses
-	 * PML, since writes now result in a vmexit.  Note, this helper will
-	 * tag SPTEs as needing write-protection even if PML is disabled or
-	 * unsupported, but that's ok because the tag is consumed if and only
-	 * if PML is enabled.  Omit the PML check to save a few uops.
+	 * When using the EPT page-modification log, the GPAs in the CPU dirty
+	 * log would come from L2 rather than L1.  Therefore, we need to rely
+	 * on write protection to record dirty pages, which bypasses PML, since
+	 * writes now result in a vmexit.  Note, the check on CPU dirty logging
+	 * being enabled is mandatory as the bits used to denote WP-only SPTEs
+	 * are reserved for NPT w/ PAE (32-bit KVM).
 	 */
-	return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
+	return vcpu->arch.mmu == &vcpu->arch.guest_mmu &&
+	       kvm_x86_ops.cpu_dirty_log_size;
 }

 bool is_nx_huge_page_enabled(void);
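For context, the reserved-bit collision can be shown with a short standalone
C sketch (not KVM code): the mask definition matches spte.h, while
MAXPHYADDR=48 is an assumed example value.  A 64-bit PAE paging entry
reserves bits 62:MAXPHYADDR (bit 63 is NX), so a payload in bits 53:52 lands
squarely in reserved territory:

/*
 * Standalone sketch: SPTE_AD_WRPROT_ONLY_MASK puts its payload in
 * bits 53:52.  64-bit paging and EPT ignore those bits, but a PAE
 * entry reserves bits 62:MAXPHYADDR, so the same bits trip
 * reserved-bit checks on NPT w/ PAE.  MAXPHYADDR=48 is an assumption.
 */
#include <stdint.h>
#include <stdio.h>

#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)

int main(void)
{
	unsigned int maxphyaddr = 48;
	uint64_t pae_rsvd = ((1ULL << 63) - 1) & ~((1ULL << maxphyaddr) - 1);

	printf("WP-only bits 0x%llx collide with PAE reserved bits: %s\n",
	       (unsigned long long)SPTE_AD_WRPROT_ONLY_MASK,
	       (SPTE_AD_WRPROT_ONLY_MASK & pae_rsvd) ? "yes" : "no");
	return 0;
}

Run, it reports a collision; gating on kvm_x86_ops.cpu_dirty_log_size is
what keeps those bits clear for NPT.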
From patchwork Thu Feb 25 20:47:27 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104933
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:27 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-3-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 02/24] KVM: x86/mmu: Check for shadow-present SPTE before
 querying A/D status
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

When updating accessed and dirty bits, check that the new SPTE is present
before attempting to query its A/D bits.  Failure to confirm the SPTE is
present can theoretically cause a false negative, e.g. if an MMIO SPTE
replaces a "real" SPTE and somehow the PFNs magically match.

Realistically, this is all but guaranteed to be a benign bug.  Fix it up
primarily so that a future patch can tweak the MMU_WARN_ON checking A/D
status to fire if the SPTE is not-present.

Fixes: f8e144971c68 ("kvm: x86/mmu: Add access tracking for tdp_mmu")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c926c6b899a1..f46972892a2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -210,13 +210,12 @@ static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
-	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
-
 	if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level))
 		return;
 
 	if (is_accessed_spte(old_spte) &&
-	    (!is_accessed_spte(new_spte) || pfn_changed))
+	    (!is_shadow_present_pte(new_spte) || !is_accessed_spte(new_spte) ||
+	     spte_to_pfn(old_spte) != spte_to_pfn(new_spte)))
 		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
@@ -444,7 +443,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 	if (was_leaf && is_dirty_spte(old_spte) &&
-	    (!is_dirty_spte(new_spte) || pfn_changed))
+	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
 		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
 
 	/*
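The false-negative scenario is easiest to see with toy SPTEs.  Below is a
standalone sketch (bit positions are arbitrary stand-ins, not KVM's real
layout) of why the present check must come before trusting the new SPTE's
accessed bit:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy bit positions, chosen only for illustration. */
#define PRESENT_BIT  (1ULL << 11)
#define ACCESSED_BIT (1ULL << 5)

static bool is_present(uint64_t spte)  { return spte & PRESENT_BIT; }
static bool is_accessed(uint64_t spte) { return spte & ACCESSED_BIT; }

/* Mirrors the fixed logic: only a present SPTE has meaningful A/D bits. */
static bool should_mark_pfn_accessed(uint64_t old_spte, uint64_t new_spte)
{
	return is_accessed(old_spte) &&
	       (!is_present(new_spte) || !is_accessed(new_spte));
}

int main(void)
{
	uint64_t old_spte = PRESENT_BIT | ACCESSED_BIT;
	/* A non-present (e.g. MMIO) SPTE whose garbage bits mimic "accessed". */
	uint64_t new_spte = ACCESSED_BIT;

	/* Without the is_present() check this would wrongly print "no". */
	printf("mark pfn accessed: %s\n",
	       should_mark_pfn_accessed(old_spte, new_spte) ? "yes" : "no");
	return 0;
}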
From patchwork Thu Feb 25 20:47:28 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104935
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:28 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-4-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 03/24] KVM: x86/mmu: Bail from fast_page_fault() if SPTE is
 not shadow-present
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Bail from fast_page_fault() if the SPTE is not a shadow-present SPTE.
Functionally, this is not strictly necessary as the !is_access_allowed()
check will eventually reject the fast path, but an early check on
shadow-present skips unnecessary checks and will allow a future patch to
tweak the A/D status auditing to warn if KVM attempts to query A/D bits
without first ensuring the SPTE is a shadow-present SPTE.

Note, is_shadow_present_pte() is quite expensive at this time, i.e. this
might be a net negative in the short term.  A future patch will optimize
is_shadow_present_pte() to a single AND operation and remedy the issue.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d75524bc8423..93b0285e8b38 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3061,6 +3061,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		if (!is_shadow_present_pte(spte))
 			break;
 
+		if (!is_shadow_present_pte(spte))
+			break;
+
 		sp = sptep_to_sp(iterator.sptep);
 		if (!is_last_spte(spte, sp->role.level))
 			break;
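For reference, the "single AND operation" the message alludes to would look
roughly like the sketch below.  The dedicated software-available bit is an
assumption made for illustration; the actual bit is chosen by the later
patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical dedicated "MMU-present" software bit (illustrative only). */
#define SPTE_MMU_PRESENT_MASK (1ULL << 11)

/* One AND plus a compare against zero, cheap enough for the fast path. */
static bool is_shadow_present_pte(uint64_t spte)
{
	return spte & SPTE_MMU_PRESENT_MASK;
}

int main(void)
{
	printf("%d %d\n", is_shadow_present_pte(SPTE_MMU_PRESENT_MASK),
	       is_shadow_present_pte(0));
	return 0;
}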
"seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a25:c503:: with SMTP id v3mr6741053ybe.397.1614286088591; Thu, 25 Feb 2021 12:48:08 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:29 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-5-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 04/24] KVM: x86/mmu: Disable MMIO caching if MMIO value collides with L1TF From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Disable MMIO caching if the MMIO value collides with the L1TF mitigation that usurps high PFN bits. In practice this should never happen as only CPUs with SME support can generate such a collision (because the MMIO value can theoretically get adjusted into legal memory), and no CPUs exist that support SME and are susceptible to L1TF. But, closing the hole is trivial. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/spte.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index ef55f0bc4ccf..9ea097bcb491 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -245,8 +245,19 @@ u64 mark_spte_for_access_track(u64 spte) void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask) { BUG_ON((u64)(unsigned)access_mask != access_mask); - WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)); WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask); + + /* + * Disable MMIO caching if the MMIO value collides with the bits that + * are used to hold the relocated GFN when the L1TF mitigation is + * enabled. This should never fire as there is no known hardware that + * can trigger this condition, e.g. SME/SEV CPUs that require a custom + * MMIO value are not susceptible to L1TF. 
+ */ + if (WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << + SHADOW_NONPRESENT_OR_RSVD_MASK_LEN))) + mmio_value = 0; + shadow_mmio_value = mmio_value | SPTE_MMIO_MASK; shadow_mmio_access_mask = access_mask; } From patchwork Thu Feb 25 20:47:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF43EC433E0 for ; Thu, 25 Feb 2021 20:51:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 84F3264DE9 for ; Thu, 25 Feb 2021 20:51:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229566AbhBYUvP (ORCPT ); Thu, 25 Feb 2021 15:51:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50576 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233820AbhBYUtq (ORCPT ); Thu, 25 Feb 2021 15:49:46 -0500 Received: from mail-qt1-x84a.google.com (mail-qt1-x84a.google.com [IPv6:2607:f8b0:4864:20::84a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A755C0617A7 for ; Thu, 25 Feb 2021 12:48:12 -0800 (PST) Received: by mail-qt1-x84a.google.com with SMTP id 4so1547477qtc.13 for ; Thu, 25 Feb 2021 12:48:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=JwxadR5cbtm2yzkFH/RHMCjjB7zf5B8aQ7otXkvJ48s=; b=C9QD1T5V1WJ4CTYWP1310gEZgpgR0plDlCVznZzCQy5Nm4/4Dow6tXbKlweqgXW3LE Q42BabssrlvmBtzoYJQOBJcLBEHfuicpKTT7ikL0TEVdRoTOLhlmzgM9Net7WJa+b6d5 POmfeGzUSRdE6FLzdbYBVtXTLVcA5QZgkshb2EZhON9iET90LZDkH60qwwN6Un6C/ZQh GjgG9TpDFArxNuWo7HQnOb82iFECBJMvkELxHpk+ac/q0i5w+7JRAy1Wk8hV17ZtfKo9 VQIgoalOK72b9SdUX2y2qVoT+JV57aR4//hjTVztiTVJyjMgxW9suV2mQlFci5CvxNbo h9/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=JwxadR5cbtm2yzkFH/RHMCjjB7zf5B8aQ7otXkvJ48s=; b=uTzw6XFoEtYC5S0sKKDmIsI9mptPrG901affSXBkoFFAfymDgc882ECJUwvj4XmKs5 FQyosbR9y4hz4iJFuZVFgXKjEjjdLu/kDpXrEJFtz6DARuso7CSL+FyBhmYQLWHG6nqM 1RKW7BMLMoL+lLce/Ahftzq7LaPYYmMS6wdipbTdxR+ifjxUfBPyQUY0UG78SPW6g2U+ WYVjDuMRnrmbZtObaXq2yvc+FECJBGRt5wP5cDo/vlmFkYGxC2E5mJMDMTjhR49uR332 fo1YOMtsRuv0uYRwHTd+P2Pn9OpHAXFOMKOanQhFGlGUzFZppUZ1kiurw7mowkwv1xZs wP7g== X-Gm-Message-State: AOAM531BxF/KoLeDIdhNFWaoI30D+8NMn0RlNRfySSYoxaKYCBpQaCIm MJwasJ5CTDyqRAM9qpuzsb6FF+/2IKA= X-Google-Smtp-Source: ABdhPJwfA7FGvLPKnHh7BCxREcyp8iIyRNEkE61CYXrM3B57S/ghI1+CHX4CA/vs+RwgvwpWvp0NF/dAoPA= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a05:6214:1c45:: with SMTP id if5mr4682368qvb.9.1614286091253; Thu, 25 Feb 2021 12:48:11 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:30 -0800 
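The guarded collision can be reproduced in a standalone sketch.  All the
constants below are illustrative (a contrived MAXPHYADDR and an SME-style
bit above it), but the check has the same shape as the new WARN_ON:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Illustrative values, not taken from any real CPU. */
	unsigned int maxphyaddr = 46;
	unsigned int rsvd_len = 5;	/* width of the L1TF relocation mask */
	uint64_t l1tf_mask = ((1ULL << rsvd_len) - 1) << (maxphyaddr - rsvd_len);
	uint64_t mmio_value = 1ULL << 47;	/* SME-like bit above MAXPHYADDR */

	/* Same shape as the new guard: a colliding value disables caching. */
	if (mmio_value & (l1tf_mask << rsvd_len))
		mmio_value = 0;

	printf("effective mmio_value = 0x%llx\n", (unsigned long long)mmio_value);
	return 0;
}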
From patchwork Thu Feb 25 20:47:30 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104939
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:30 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-6-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 05/24] KVM: x86/mmu: Retry page faults that hit an invalid
 memslot
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Retry page faults (re-enter the guest) that hit an invalid memslot
instead of treating the memslot as not existing, i.e. handling the page
fault as an MMIO access.  When deleting a memslot, SPTEs aren't zapped
and the TLBs aren't flushed until after the memslot has been marked
invalid.  Handling the invalid slot as MMIO means there's a small window
where a page fault could replace a valid SPTE with an MMIO SPTE.  The
legacy MMU handles such a scenario cleanly, but the TDP MMU assumes such
behavior is impossible (see the BUG() in __handle_changed_spte()).
There's really no good reason why the legacy MMU should allow such a
scenario, and closing this hole allows for additional cleanups.

Fixes: 2f2fad0897cb ("kvm: x86/mmu: Add functions to handle changed TDP SPTEs")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 93b0285e8b38..9eb5ccb66e31 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3656,6 +3656,14 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	bool async;
 
+	/*
+	 * Retry the page fault if the gfn hit a memslot that is being deleted
+	 * or moved.  This ensures any existing SPTEs for the old memslot will
+	 * be zapped before KVM inserts a new MMIO SPTE for the gfn.
+	 */
+	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
+		return true;
+
 	/* Don't expose private memslots to L2. */
 	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
 		*pfn = KVM_PFN_NOSLOT;
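A standalone sketch of the new ordering (the types and flag value are
stand-ins, not KVM's): a slot that exists but is marked invalid is retried,
and only a truly absent slot is treated as MMIO:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define KVM_MEMSLOT_INVALID (1u << 1)	/* stand-in flag value */

struct memslot { unsigned int flags; };

enum fault_action { RETRY_FAULT, HANDLE_AS_MMIO, MAP_PFN };

static enum fault_action classify_fault(const struct memslot *slot)
{
	/* Mid-deletion/move: bail and re-fault until the zap completes. */
	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
		return RETRY_FAULT;
	if (!slot)
		return HANDLE_AS_MMIO;
	return MAP_PFN;
}

int main(void)
{
	struct memslot mid_deletion = { .flags = KVM_MEMSLOT_INVALID };

	printf("retry: %d\n", classify_fault(&mid_deletion) == RETRY_FAULT);
	return 0;
}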
From patchwork Thu Feb 25 20:47:31 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104951
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:31 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-7-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 06/24] KVM: x86/mmu: Don't install bogus MMIO SPTEs if MMIO
 caching is disabled
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

If MMIO caching is disabled, e.g. when using shadow paging on CPUs with
52 bits of PA space, go straight to MMIO emulation and don't install an
MMIO SPTE.  The SPTE will just generate a !PRESENT #PF, i.e. can't
actually accelerate future MMIO.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c  | 12 +++++++++++-
 arch/x86/kvm/mmu/spte.c |  7 ++++++-
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9eb5ccb66e31..37c68abc54b8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2946,9 +2946,19 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
 		return true;
 	}
 
-	if (unlikely(is_noslot_pfn(pfn)))
+	if (unlikely(is_noslot_pfn(pfn))) {
 		vcpu_cache_mmio_info(vcpu, gva, gfn,
 				     access & shadow_mmio_access_mask);
+		/*
+		 * If MMIO caching is disabled, emulate immediately without
+		 * touching the shadow page tables as attempting to install an
+		 * MMIO SPTE will just be an expensive nop.
+		 */
+		if (unlikely(!shadow_mmio_value)) {
+			*ret_val = RET_PF_EMULATE;
+			return true;
+		}
+	}
 
 	return false;
 }
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 9ea097bcb491..dcba9c1cbe29 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -51,6 +51,8 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 mask = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
+	WARN_ON_ONCE(!shadow_mmio_value);
+
 	access &= shadow_mmio_access_mask;
 	mask |= shadow_mmio_value | access;
 	mask |= gpa | shadow_nonpresent_or_rsvd_mask;
@@ -258,7 +260,10 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask)
 				  SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)))
 		mmio_value = 0;
 
-	shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
+	if (mmio_value)
+		shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
+	else
+		shadow_mmio_value = 0;
 	shadow_mmio_access_mask = access_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
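As background, a sketch of why the mask can end up zero in the first place.
The values mirror the shadow-paging setup described above but should be
treated as illustrative:

#include <stdint.h>
#include <stdio.h>

/*
 * Shadow paging "caches" MMIO by setting a reserved PA bit so the access
 * faults with PFEC.RSVD=1.  With 52 bits of PA space there is no reserved
 * bit left to borrow, so the mask, and thus MMIO caching, is off.
 */
static uint64_t mmio_mask_for(unsigned int shadow_phys_bits)
{
	if (shadow_phys_bits < 52)
		return (1ULL << 51) | 0x1;	/* a reserved bit + PRESENT */
	return 0;
}

int main(void)
{
	printf("mask(48)=0x%llx mask(52)=0x%llx\n",
	       (unsigned long long)mmio_mask_for(48),
	       (unsigned long long)mmio_mask_for(52));
	return 0;
}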
From patchwork Thu Feb 25 20:47:32 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104949
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:32 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-8-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 07/24] KVM: x86/mmu: Handle MMIO SPTEs directly in
 mmu_set_spte()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Now that it should be impossible to convert a valid SPTE to an MMIO SPTE,
handle MMIO SPTEs early in mmu_set_spte() without going through
set_spte() and all the logic for removing an existing, valid SPTE.  The
other caller of set_spte(), FNAME(sync_page)(), explicitly handles MMIO
SPTEs prior to calling set_spte().

This simplifies mmu_set_spte() and set_spte(), and also "fixes" an oddity
where MMIO SPTEs are traced by both trace_kvm_mmu_set_spte() and
trace_mark_mmio_spte().

Note, mmu_spte_set() will WARN if this new approach causes KVM to create
an MMIO SPTE overtop a valid SPTE.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37c68abc54b8..4a24beefff94 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -236,17 +236,6 @@ static unsigned get_mmio_spte_access(u64 spte)
 	return spte & shadow_mmio_access_mask;
 }
 
-static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
-			  kvm_pfn_t pfn, unsigned int access)
-{
-	if (unlikely(is_noslot_pfn(pfn))) {
-		mark_mmio_spte(vcpu, sptep, gfn, access);
-		return true;
-	}
-
-	return false;
-}
-
 static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 {
 	u64 kvm_gen, spte_gen, gen;
@@ -2561,9 +2550,6 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	struct kvm_mmu_page *sp;
 	int ret;
 
-	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
-		return 0;
-
 	sp = sptep_to_sp(sptep);
 
 	ret = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
@@ -2593,6 +2579,11 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
 		 *sptep, write_fault, gfn);
 
+	if (unlikely(is_noslot_pfn(pfn))) {
+		mark_mmio_spte(vcpu, sptep, gfn, pte_access);
+		return RET_PF_EMULATE;
+	}
+
 	if (is_shadow_present_pte(*sptep)) {
 		/*
 		 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
@@ -2626,9 +2617,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
 						   KVM_PAGES_PER_HPAGE(level));
 
-	if (unlikely(is_mmio_spte(*sptep)))
-		ret = RET_PF_EMULATE;
-
 	/*
 	 * The fault is fully spurious if and only if the new SPTE and old SPTE
 	 * are identical, and emulation is not required.
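The resulting control flow is easier to see flattened into a toy function
(toy types and return codes, not KVM's signatures):

#include <stdbool.h>
#include <stdio.h>

enum toy_ret { TOY_RET_PF_EMULATE, TOY_RET_PF_FIXED };

/* Sketch of mmu_set_spte()'s new shape: classify MMIO before set_spte(). */
static enum toy_ret mmu_set_spte_sketch(bool noslot_pfn)
{
	if (noslot_pfn) {
		/* Install the MMIO SPTE and bail, skipping all of the */
		/* overwrite/unlink handling that only valid SPTEs need. */
		return TOY_RET_PF_EMULATE;
	}

	/* ... normal path: make_spte(), handle overwrites, TLB flushes ... */
	return TOY_RET_PF_FIXED;
}

int main(void)
{
	printf("%d %d\n", mmu_set_spte_sketch(true), mmu_set_spte_sketch(false));
	return 0;
}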
From patchwork Thu Feb 25 20:47:33 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104959
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:33 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-9-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 08/24] KVM: x86/mmu: Drop redundant trace_kvm_mmu_set_spte()
 in the TDP MMU
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Remove TDP MMU's call to trace_kvm_mmu_set_spte() that is done for both
shadow-present SPTEs and MMIO SPTEs.  It's fully redundant for the
former, and unnecessary for the latter.  This aligns TDP MMU tracing
behavior with that of the legacy MMU.

Fixes: 33dd3574f5fe ("kvm: x86/mmu: Add existing trace points to TDP MMU")
Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f46972892a2d..782cae1eb5e1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -773,12 +773,11 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
 		ret = RET_PF_EMULATE;
-	} else
+	} else {
 		trace_kvm_mmu_set_spte(iter->level, iter->gfn,
 				       rcu_dereference(iter->sptep));
+	}
 
-	trace_kvm_mmu_set_spte(iter->level, iter->gfn,
-			       rcu_dereference(iter->sptep));
 	if (!prefault)
 		vcpu->stat.pf_fixed++;
to0n3EewNp0nMNP+yQPCxHjSCUv8Ndkr1y8pkslt3sYjHa7KR9acAct7Ljm+EojbNePZ ZgdA== X-Gm-Message-State: AOAM531u4FsXFSjHTwTUE3iW/FWirUeTPzaEyC3XQ1WaQD1yE66JqW7g YeZtSy+IvT3wUjNlema5Kp/X67BxpeE= X-Google-Smtp-Source: ABdhPJx3w04ypNexMcAliwGECnbLKlW0y3JTQsT0wE66V6WzKtiGvpxXGgLwGA12LK/s+jUC+eaKDpDhjAg= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a25:aae2:: with SMTP id t89mr7485713ybi.63.1614286101914; Thu, 25 Feb 2021 12:48:21 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:34 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-10-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 09/24] KVM: x86/mmu: Rename 'mask' to 'spte' in MMIO SPTE helpers From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The value returned by make_mmio_spte() is a SPTE, it is not a mask. Name it accordingly. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 6 +++--- arch/x86/kvm/mmu/spte.c | 10 +++++----- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4a24beefff94..ced412f90b7d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -215,10 +215,10 @@ bool is_nx_huge_page_enabled(void) static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn, unsigned int access) { - u64 mask = make_mmio_spte(vcpu, gfn, access); + u64 spte = make_mmio_spte(vcpu, gfn, access); - trace_mark_mmio_spte(sptep, gfn, mask); - mmu_spte_set(sptep, mask); + trace_mark_mmio_spte(sptep, gfn, spte); + mmu_spte_set(sptep, spte); } static gfn_t get_mmio_spte_gfn(u64 spte) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index dcba9c1cbe29..e4ef3267f9ac 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -48,18 +48,18 @@ static u64 generation_mmio_spte_mask(u64 gen) u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access) { u64 gen = kvm_vcpu_memslots(vcpu)->generation & MMIO_SPTE_GEN_MASK; - u64 mask = generation_mmio_spte_mask(gen); + u64 spte = generation_mmio_spte_mask(gen); u64 gpa = gfn << PAGE_SHIFT; WARN_ON_ONCE(!shadow_mmio_value); access &= shadow_mmio_access_mask; - mask |= shadow_mmio_value | access; - mask |= gpa | shadow_nonpresent_or_rsvd_mask; - mask |= (gpa & shadow_nonpresent_or_rsvd_mask) + spte |= shadow_mmio_value | access; + spte |= gpa | shadow_nonpresent_or_rsvd_mask; + spte |= (gpa & shadow_nonpresent_or_rsvd_mask) << SHADOW_NONPRESENT_OR_RSVD_MASK_LEN; - return mask; + return spte; } static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) From patchwork Thu Feb 25 20:47:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104957 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, 
From patchwork Thu Feb 25 20:47:35 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104957
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:35 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-11-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 10/24] KVM: x86/mmu: Stop using software available bits to
 denote MMIO SPTEs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Ben Gardon

Stop tagging MMIO SPTEs with specific available bits and instead detect
MMIO SPTEs by checking for their unique SPTE value.  The value is
guaranteed to be unique on shadow paging and NPT as setting reserved
physical address bits on any other type of SPTE would constitute a KVM
bug.  Ditto for EPT, as creating a WX non-MMIO SPTE would also be a bug.

Note, this approach is also future-compatible with TDX, which will need
to reflect MMIO EPT violations as #VEs into the guest.  To create an EPT
violation instead of a misconfig, TDX EPTs will need to have RWX=0.  But,
MMIO SPTEs will also be the only case where KVM clears SUPPRESS_VE, so
MMIO SPTEs will still be guaranteed to have a unique value within a given
MMU context.

The main motivation is to make it easier to reason about which types of
SPTEs use which available bits.  As a happy side effect, this frees up
two more bits for storing the MMIO generation.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h      |  2 +-
 arch/x86/kvm/mmu/mmu.c  |  2 +-
 arch/x86/kvm/mmu/spte.c | 11 ++++++-----
 arch/x86/kvm/mmu/spte.h | 10 ++++------
 arch/x86/kvm/svm/svm.c  |  2 +-
 arch/x86/kvm/vmx/vmx.c  |  3 ++-
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index c68bfc3e2402..00f4a541e04d 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -59,7 +59,7 @@ static __always_inline u64 rsvd_bits(int s, int e)
 	return ((2ULL << (e - s)) - 1) << s;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask);
+void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
 
 void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ced412f90b7d..f92571b786a2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5726,7 +5726,7 @@ static void kvm_set_mmio_spte_mask(void)
 	else
 		mask = 0;
 
-	kvm_mmu_set_mmio_spte_mask(mask, ACC_WRITE_MASK | ACC_USER_MASK);
+	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
 }
 
 static bool get_nx_auto_mode(void)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index e4ef3267f9ac..b2379094a8c1 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -23,6 +23,7 @@ u64 __read_mostly shadow_user_mask;
 u64 __read_mostly shadow_accessed_mask;
 u64 __read_mostly shadow_dirty_mask;
 u64 __read_mostly shadow_mmio_value;
+u64 __read_mostly shadow_mmio_mask;
 u64 __read_mostly shadow_mmio_access_mask;
 u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_mask;
@@ -163,6 +164,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		spte = mark_spte_for_access_track(spte);
 
 out:
+	WARN_ON(is_mmio_spte(spte));
 	*new_spte = spte;
 	return ret;
 }
@@ -244,7 +246,7 @@ u64 mark_spte_for_access_track(u64 spte)
 	return spte;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask)
+void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 {
 	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
@@ -260,10 +262,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask)
 				  SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)))
 		mmio_value = 0;
 
-	if (mmio_value)
-		shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
-	else
-		shadow_mmio_value = 0;
+	WARN_ON((mmio_value & mmio_mask) != mmio_value);
+	shadow_mmio_value = mmio_value;
+	shadow_mmio_mask = mmio_mask;
 	shadow_mmio_access_mask = access_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 6de3950fd704..642a17b9964c 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -8,15 +8,11 @@
 #define PT_FIRST_AVAIL_BITS_SHIFT 10
 #define PT64_SECOND_AVAIL_BITS_SHIFT 54
 
-/*
- * The mask used to denote special SPTEs, which can be either MMIO SPTEs or
- * Access Tracking SPTEs.
- */
+/* The mask used to denote Access Tracking SPTEs.  Note, val=3 is available. */
 #define SPTE_SPECIAL_MASK (3ULL << 52)
 #define SPTE_AD_ENABLED_MASK (0ULL << 52)
 #define SPTE_AD_DISABLED_MASK (1ULL << 52)
 #define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)
-#define SPTE_MMIO_MASK (3ULL << 52)
 
 #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
 #define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
@@ -98,6 +94,7 @@ extern u64 __read_mostly shadow_user_mask;
 extern u64 __read_mostly shadow_accessed_mask;
 extern u64 __read_mostly shadow_dirty_mask;
 extern u64 __read_mostly shadow_mmio_value;
+extern u64 __read_mostly shadow_mmio_mask;
 extern u64 __read_mostly shadow_mmio_access_mask;
 extern u64 __read_mostly shadow_present_mask;
 extern u64 __read_mostly shadow_me_mask;
@@ -167,7 +164,8 @@ extern u8 __read_mostly shadow_phys_bits;
 
 static inline bool is_mmio_spte(u64 spte)
 {
-	return (spte & SPTE_SPECIAL_MASK) == SPTE_MMIO_MASK;
+	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
+	       likely(shadow_mmio_value);
 }
 
 static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c4f2f2f6b945..54610270f66a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -885,7 +885,7 @@ static __init void svm_adjust_mmio_mask(void)
 	 */
 	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
 
-	kvm_mmu_set_mmio_spte_mask(mask, PT_WRITABLE_MASK | PT_USER_MASK);
+	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
 }
 
 static void svm_hardware_teardown(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 908f7a8af064..8a8423a97f13 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4320,7 +4320,8 @@ static void ept_set_mmio_spte_mask(void)
 	 * EPT Misconfigurations can be generated if the value of bits 2:0
 	 * of an EPT paging-structure entry is 110b (write/execute).
 	 */
-	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, 0);
+	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
+				   VMX_EPT_RWX_MASK, 0);
 }
 
 #define VMX_XSS_EXIT_BITMAP 0
<20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-12-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 11/24] KVM: x86/mmu: Add module param to disable MMIO caching (for testing) From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a module param to disable MMIO caching so that it's possible to test the related flows without access to the necessary hardware. Using shadow paging with 64-bit KVM and 52 bits of physical address space must disable MMIO caching as there are no reserved bits to be had. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/spte.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index b2379094a8c1..503dec3f8c7a 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -17,6 +17,9 @@ #include +static bool __read_mostly enable_mmio_caching = true; +module_param_named(mmio_caching, enable_mmio_caching, bool, 0444); + u64 __read_mostly shadow_nx_mask; u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */ u64 __read_mostly shadow_user_mask; @@ -251,6 +254,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask) BUG_ON((u64)(unsigned)access_mask != access_mask); WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask); + if (!enable_mmio_caching) + mmio_value = 0; + /* * Disable MMIO caching if the MMIO value collides with the bits that * are used to hold the relocated GFN when the L1TF mitigation is From patchwork Thu Feb 25 20:47:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104963 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F3ADC433DB for ; Thu, 25 Feb 2021 20:54:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 381CD64DA3 for ; Thu, 25 Feb 2021 20:54:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233597AbhBYUyE (ORCPT ); Thu, 25 Feb 2021 15:54:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234358AbhBYUuf (ORCPT ); Thu, 25 Feb 2021 15:50:35 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD01AC061226 for ; Thu, 25 Feb 2021 12:48:30 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id a63so7643094yba.2 for ; Thu, 25 Feb 2021 12:48:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=CStveBKOEZIJIuu0d29PcoMsbDwro7qz1nDfW0Tyl2s=; b=DeIPOSTh5jTvcOHDAOsTHxIsbsJUCUg9SYNF479Xcm0IjuOKIRiqzI+dRIclQJY4I/ 59EbcpFa92utRPo8I+dh33vNkxfsMfM42m5+nRoJ2hA4ofJN4QAaldzUavKj7eu+U3/J vbIuv9Mb8amcgyoR9L4NsqRQVOPKXxW64qKhtf9TE4RdweoSwjvEwn+bDhJYJbIqoUiC 9E2PtbFGedDwj6nRoeR9LY6caxe+vs3wtcRtzR5pgHxAt3Ael07XZ+R6NUKSxMHeEyNR cmMap0umW0+Jpghl+ENc3jZXR5q+0DDGp3/S5UIhItKC05EFcKBYuYm9AvZjGU+vy02e lr0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=CStveBKOEZIJIuu0d29PcoMsbDwro7qz1nDfW0Tyl2s=; b=l8TReuLMOI0oXuiHURxtI/0GS6DpAChSpnMbatlM9P+QuDPcj1WR6Z6HCDvwB//rM1 PbJ+QBLjFn8zw8ZtMNAxBn2+GdHK8E9P79UYDyPxibMdd4MjQ247+HVTCdsWqlcyj8cW JI2Y2GIYPmGAzX0r4m2OTyNDeItnLwiY6OHo3WZ7RmM2hqdWlvclMQNMNf76kSw0Z7oZ N4whN+vxek6/GpHRuPcw+kTWSkvetf+qL5udT9WaEYJnhKlpgRUsuOckuYBYb+K+AiJZ lvMSMykfyjDkfEg3acRHIQ4a4YApxm1cu4hElRikSNJk/ne/5VunVTlywTuCGEL5MOG5 031Q== X-Gm-Message-State: AOAM5333v/fkleJnsdo/o2qWcm1eNpxUobFWHFguxN0kQkps3fqE/2xd weI5s5+T+w8xnwBnloR/8oC9pgzvm/o= X-Google-Smtp-Source: ABdhPJzeXKlrJOe/IPappNVx1l6zaaiJ4tcWs9xEBvZDyOG6iQDDa1JpG82lHTnqQBzQOa42grLFkzRI7dM= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a25:f0b:: with SMTP id 11mr5204604ybp.208.1614286110008; Thu, 25 Feb 2021 12:48:30 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:37 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-13-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 12/24] KVM: x86/mmu: Rename and document A/D scheme for TDP SPTEs From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Rename the various A/D status defines to explicitly associate them with TDP. There is a subtle dependency on the bits in question never being set when using PAE paging, as those bits are reserved, not available. I.e. using these bits outside of TDP (technically EPT) would cause explosions. No functional change intended. Signed-off-by: Sean Christopherson --- Documentation/virt/kvm/locking.rst | 37 +++++++++++++++--------------- arch/x86/kvm/mmu/spte.c | 17 ++++++++++---- arch/x86/kvm/mmu/spte.h | 34 ++++++++++++++++++++------- 3 files changed, 56 insertions(+), 32 deletions(-) diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst index 0aa4817b466d..85876afe0441 100644 --- a/Documentation/virt/kvm/locking.rst +++ b/Documentation/virt/kvm/locking.rst @@ -38,12 +38,11 @@ the mmu-lock on x86. Currently, the page fault can be fast in one of the following two cases: 1. Access Tracking: The SPTE is not present, but it is marked for access - tracking i.e. the SPTE_SPECIAL_MASK is set. That means we need to - restore the saved R/X bits. This is described in more detail later below. + tracking. That means we need to restore the saved R/X bits. This is + described in more detail later below. -2. Write-Protection: The SPTE is present and the fault is
That means we just need to change the W bit of - the spte. +2. Write-Protection: The SPTE is present and the fault is caused by + write-protect. That means we just need to change the W bit of the spte. What we use to avoid all the race is the SPTE_HOST_WRITEABLE bit and SPTE_MMU_WRITEABLE bit on the spte: @@ -54,9 +53,9 @@ SPTE_MMU_WRITEABLE bit on the spte: page write-protection. On fast page fault path, we will use cmpxchg to atomically set the spte W -bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_WRITE_PROTECT = 1, or -restore the saved R/X bits if VMX_EPT_TRACK_ACCESS mask is set, or both. This -is safe because whenever changing these bits can be detected by cmpxchg. +bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_WRITE_PROTECT = 1, to +restore the saved R/X bits if for an access-traced spte, or both. This is +safe because whenever changing these bits can be detected by cmpxchg. But we need carefully check these cases: @@ -185,17 +184,17 @@ See the comments in spte_has_volatile_bits() and mmu_spte_update(). Lockless Access Tracking: This is used for Intel CPUs that are using EPT but do not support the EPT A/D -bits. In this case, when the KVM MMU notifier is called to track accesses to a -page (via kvm_mmu_notifier_clear_flush_young), it marks the PTE as not-present -by clearing the RWX bits in the PTE and storing the original R & X bits in -some unused/ignored bits. In addition, the SPTE_SPECIAL_MASK is also set on the -PTE (using the ignored bit 62). When the VM tries to access the page later on, -a fault is generated and the fast page fault mechanism described above is used -to atomically restore the PTE to a Present state. The W bit is not saved when -the PTE is marked for access tracking and during restoration to the Present -state, the W bit is set depending on whether or not it was a write access. If -it wasn't, then the W bit will remain clear until a write access happens, at -which time it will be set using the Dirty tracking mechanism described above. +bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and +when the KVM MMU notifier is called to track accesses to a page (via +kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware +by clearing the RWX bits in the PTE and storing the original R & X bits in more +unused/ignored bits. When the VM tries to access the page later on, a fault is +generated and the fast page fault mechanism described above is used to +atomically restore the PTE to a Present state. The W bit is not saved when the +PTE is marked for access tracking and during restoration to the Present state, +the W bit is set depending on whether or not it was a write access. If it +wasn't, then the W bit will remain clear until a write access happens, at which +time it will be set using the Dirty tracking mechanism described above. 3. 
Reference ------------ diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 503dec3f8c7a..3eaf143b7d12 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -42,7 +42,7 @@ static u64 generation_mmio_spte_mask(u64 gen) u64 mask; WARN_ON(gen & ~MMIO_SPTE_GEN_MASK); - BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) & SPTE_SPECIAL_MASK); + BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) & SPTE_TDP_AD_MASK); mask = (gen << MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_SPTE_GEN_LOW_MASK; mask |= (gen << MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_SPTE_GEN_HIGH_MASK; @@ -96,9 +96,16 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level, int ret = 0; if (ad_disabled) - spte |= SPTE_AD_DISABLED_MASK; + spte |= SPTE_TDP_AD_DISABLED_MASK; else if (kvm_vcpu_ad_need_write_protect(vcpu)) - spte |= SPTE_AD_WRPROT_ONLY_MASK; + spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK; + + /* + * Bits 62:52 of PAE SPTEs are reserved. WARN if said bits are set + * if PAE paging may be employed (shadow paging or any 32-bit KVM). + */ + WARN_ON_ONCE((!tdp_enabled || !IS_ENABLED(CONFIG_X86_64)) && + (spte & SPTE_TDP_AD_MASK)); /* * For the EPT case, shadow_present_mask is 0 if hardware @@ -180,7 +187,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) shadow_user_mask | shadow_x_mask | shadow_me_mask; if (ad_disabled) - spte |= SPTE_AD_DISABLED_MASK; + spte |= SPTE_TDP_AD_DISABLED_MASK; else spte |= shadow_accessed_mask; @@ -288,7 +295,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask, { BUG_ON(!dirty_mask != !accessed_mask); BUG_ON(!accessed_mask && !acc_track_mask); - BUG_ON(acc_track_mask & SPTE_SPECIAL_MASK); + BUG_ON(acc_track_mask & SPTE_TDP_AD_MASK); shadow_user_mask = user_mask; shadow_accessed_mask = accessed_mask; diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 642a17b9964c..fd0a7911f098 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -8,11 +8,24 @@ #define PT_FIRST_AVAIL_BITS_SHIFT 10 #define PT64_SECOND_AVAIL_BITS_SHIFT 54 -/* The mask used to denote Access Tracking SPTEs. Note, val=3 is available. */ -#define SPTE_SPECIAL_MASK (3ULL << 52) -#define SPTE_AD_ENABLED_MASK (0ULL << 52) -#define SPTE_AD_DISABLED_MASK (1ULL << 52) -#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52) +/* + * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also + * be restricted to using write-protection (for L2 when CPU dirty logging, i.e. + * PML, is enabled). Use bits 52 and 53 to hold the type of A/D tracking that + * is must be employed for a given TDP SPTE. + * + * Note, the "enabled" mask must be '0', as bits 62:52 are _reserved_ for PAE + * paging, including NPT PAE. This scheme works because legacy shadow paging + * is guaranteed to have A/D bits and write-protection is forced only for + * TDP with CPU dirty logging (PML). If NPT ever gains PML-like support, it + * must be restricted to 64-bit KVM. 
+ */ +#define SPTE_TDP_AD_SHIFT 52 +#define SPTE_TDP_AD_MASK (3ULL << SPTE_TDP_AD_SHIFT) +#define SPTE_TDP_AD_ENABLED_MASK (0ULL << SPTE_TDP_AD_SHIFT) +#define SPTE_TDP_AD_DISABLED_MASK (1ULL << SPTE_TDP_AD_SHIFT) +#define SPTE_TDP_AD_WRPROT_ONLY_MASK (2ULL << SPTE_TDP_AD_SHIFT) +static_assert(SPTE_TDP_AD_ENABLED_MASK == 0); #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK #define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1)) @@ -100,7 +113,7 @@ extern u64 __read_mostly shadow_present_mask; extern u64 __read_mostly shadow_me_mask; /* - * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; + * SPTEs in MMUs without A/D bits are marked with SPTE_TDP_AD_DISABLED_MASK; * shadow_acc_track_mask is the set of bits to be cleared in non-accessed * pages. */ @@ -176,13 +189,18 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp) static inline bool spte_ad_enabled(u64 spte) { MMU_WARN_ON(is_mmio_spte(spte)); - return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_DISABLED_MASK; + return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_DISABLED_MASK; } static inline bool spte_ad_need_write_protect(u64 spte) { MMU_WARN_ON(is_mmio_spte(spte)); - return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK; + /* + * This is benign for non-TDP SPTEs as SPTE_TDP_AD_ENABLED_MASK is '0', + * and non-TDP SPTEs will never set these bits. Optimize for 64-bit + * TDP and do the A/D type check unconditionally. + */ + return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_ENABLED_MASK; } static inline u64 spte_shadow_accessed_mask(u64 spte) From patchwork Thu Feb 25 20:47:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104953 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89FEAC433E9 for ; Thu, 25 Feb 2021 20:52:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5643064DE9 for ; Thu, 25 Feb 2021 20:52:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234600AbhBYUwU (ORCPT ); Thu, 25 Feb 2021 15:52:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234220AbhBYUuN (ORCPT ); Thu, 25 Feb 2021 15:50:13 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6015C0611C2 for ; Thu, 25 Feb 2021 12:48:33 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id j4so7502442ybt.23 for ; Thu, 25 Feb 2021 12:48:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=iKJLs+velK3tbbbqG24O5VyKgRTy3wHLluGHdOvzpgU=; b=c16m5fP+SpCoY8+Sz3VL1bRlpB0IOoWxnO0jmB37hGhNGm2KgtoNzrIye/2rFrSAt8 KhBU9Q9IloU6akt/33FAi29Elcb+fD7Ldf3WQWpKeayx/39ZbGeeLnVXYTOJqi+qjOJw 
xHGGweXg+BWHNZvy6ML1iAKUTK9y5c9Jv3NtsLgD+DV0ueb5W9MJX9OcMNIgBnFVfaVA 3FLemBZ0WblYjTtAa3Mxavr/gGaEvm+xn53jy2B6Vh8U2lQ4oMuxZvT6+4mGKVrtazWl KEMcpAI1Pf90DZIqpxlarn52e+A8PAZ8jDvKHd6UE7gd3oaRRVzp64/u/bnhYS0ovZwb sw7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=iKJLs+velK3tbbbqG24O5VyKgRTy3wHLluGHdOvzpgU=; b=ZpW/5ZEM6fbKxEcG76oCEjiccuWUvkLsXiJw/ajMwjUpSIhupy24sBH9KmRvuYPXkW JrafFcDsm+kXIJpUcRjVCIW20F0RybUGm36KMvPZK1JRYKpAes24wNOdGiZhJEkxEpRA hNvLn9TvKIm4daj8chIhHdGBs+3k+JPyXj+s3to0W42kj7v0/0/A9npycdwBDceJzAa/ iK66Cj6JBrMq/aHJNmW4ZwPJTMggnCCBbfF80najY1LlMNutwO/D3RCcg/4gVxy1saq4 12QBlH6tgs6bEhSSXzTmj5sIoX5walObBHrQ1VRiINNOpW87jEpEkOLYSZtdRra2Wa2g x+GQ== X-Gm-Message-State: AOAM533q1WoJhi1Ydil/bszcSx0Je0iH3DXgLkndthwytDrCbaZWIpmF A/L8/ojbmqOb0KecoMZTRnlD1VwVscA= X-Google-Smtp-Source: ABdhPJyxdfNw7gAbChySrqKHw9zv/yCfvcjoe75Yr2KYX6LuA2A8TRL9xhG7JHcEgHCraEL9ypNV3Ztjmy8= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a25:482:: with SMTP id 124mr6891292ybe.315.1614286112926; Thu, 25 Feb 2021 12:48:32 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:38 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-14-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 13/24] KVM: x86/mmu: Use MMIO SPTE bits 53 and 52 for the MMIO generation From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use bits 53 and 52 for the MMIO generation now that they're not used to identify MMIO SPTEs. 
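
As a quick illustration of the resulting layout, here is a standalone sketch of the generation packing: the MMIO_SPTE_GEN_* constants mirror the definitions in spte.h below, while the pack/unpack helpers and main() are illustrative only, not kernel code.

/*
 * A 20-bit MMIO generation is split across SPTE bits 11:3 (low 9 bits)
 * and bits 62:52 (high 11 bits), mirroring generation_mmio_spte_mask().
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l)  ((~0ULL << (l)) & (~0ULL >> (63 - (h))))

#define MMIO_SPTE_GEN_LOW_START    3
#define MMIO_SPTE_GEN_LOW_END      11
#define MMIO_SPTE_GEN_HIGH_START   52
#define MMIO_SPTE_GEN_HIGH_END     62

#define MMIO_SPTE_GEN_LOW_MASK  GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, MMIO_SPTE_GEN_LOW_START)
#define MMIO_SPTE_GEN_HIGH_MASK GENMASK_ULL(MMIO_SPTE_GEN_HIGH_END, MMIO_SPTE_GEN_HIGH_START)

#define MMIO_SPTE_GEN_LOW_BITS   9
#define MMIO_SPTE_GEN_LOW_SHIFT  (MMIO_SPTE_GEN_LOW_START - 0)                        /* 3 */
#define MMIO_SPTE_GEN_HIGH_SHIFT (MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)  /* 43 */

static uint64_t pack_gen(uint64_t gen)
{
        uint64_t mask;

        /* Bits 8:0 of the generation land in SPTE bits 11:3... */
        mask = (gen << MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_SPTE_GEN_LOW_MASK;
        /* ...and bits 19:9 land in SPTE bits 62:52. */
        mask |= (gen << MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_SPTE_GEN_HIGH_MASK;
        return mask;
}

static uint64_t unpack_gen(uint64_t spte)
{
        return ((spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_SHIFT) |
               ((spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_SHIFT);
}

int main(void)
{
        uint64_t gen = 0xabcde;         /* any 20-bit value round trips */

        printf("packed=%#llx ok=%d\n", (unsigned long long)pack_gen(gen),
               unpack_gen(pack_gen(gen)) == gen);
        return 0;
}

The round trip holds for any 20-bit generation because the two shifted masks cover disjoint SPTE bit ranges.
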
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/spte.c | 1 - arch/x86/kvm/mmu/spte.h | 8 ++++---- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 3eaf143b7d12..cf0e20b34cd3 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -42,7 +42,6 @@ static u64 generation_mmio_spte_mask(u64 gen) u64 mask; WARN_ON(gen & ~MMIO_SPTE_GEN_MASK); - BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) & SPTE_TDP_AD_MASK); mask = (gen << MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_SPTE_GEN_LOW_MASK; mask |= (gen << MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_SPTE_GEN_HIGH_MASK; diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index fd0a7911f098..bf4f49890606 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -65,11 +65,11 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0); #define SPTE_MMU_WRITEABLE (1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1)) /* - * Due to limited space in PTEs, the MMIO generation is a 18 bit subset of + * Due to limited space in PTEs, the MMIO generation is a 20 bit subset of * the memslots generation and is derived as follows: * * Bits 0-8 of the MMIO generation are propagated to spte bits 3-11 - * Bits 9-17 of the MMIO generation are propagated to spte bits 54-62 + * Bits 9-19 of the MMIO generation are propagated to spte bits 52-62 * * The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in * the MMIO generation number, as doing so would require stealing a bit from @@ -82,7 +82,7 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0); #define MMIO_SPTE_GEN_LOW_START 3 #define MMIO_SPTE_GEN_LOW_END 11 -#define MMIO_SPTE_GEN_HIGH_START PT64_SECOND_AVAIL_BITS_SHIFT +#define MMIO_SPTE_GEN_HIGH_START 52 #define MMIO_SPTE_GEN_HIGH_END 62 #define MMIO_SPTE_GEN_LOW_MASK GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \ @@ -94,7 +94,7 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0); #define MMIO_SPTE_GEN_HIGH_BITS (MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1) /* remember to adjust the comment above as well if you change these */ -static_assert(MMIO_SPTE_GEN_LOW_BITS == 9 && MMIO_SPTE_GEN_HIGH_BITS == 9); +static_assert(MMIO_SPTE_GEN_LOW_BITS == 9 && MMIO_SPTE_GEN_HIGH_BITS == 11); #define MMIO_SPTE_GEN_LOW_SHIFT (MMIO_SPTE_GEN_LOW_START - 0) #define MMIO_SPTE_GEN_HIGH_SHIFT (MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS) From patchwork Thu Feb 25 20:47:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104967 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE3E5C433E9 for ; Thu, 25 Feb 2021 20:54:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8698364DA3 for ; Thu, 25 Feb 2021 20:54:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234849AbhBYUya (ORCPT ); Thu, 25 Feb 2021 15:54:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50860 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234361AbhBYUuj (ORCPT ); Thu, 25 Feb 2021 15:50:39 -0500 Received: from mail-qv1-xf49.google.com (mail-qv1-xf49.google.com [IPv6:2607:f8b0:4864:20::f49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7B839C0611C3 for ; Thu, 25 Feb 2021 12:48:36 -0800 (PST) Received: by mail-qv1-xf49.google.com with SMTP id t18so5213563qva.6 for ; Thu, 25 Feb 2021 12:48:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=XrhTobC3OEejIHUBLuboOaA6T5Q73z3PUoVSkeTI1do=; b=aOzJdenBeIwRZ3lWzSYRNWm/xIW0vfl+EgOdNvwi6R8AnIq/ZzSlUdSGeRrUx95KxJ 2yPhrNwzMATkLxCIxAmov2eVFFUYHT6HXT4qXkgHrPAi4Oa/tDNWqTNAN3DR8v7kxZMv JxalCH8pAWAfRAXu+6QAQYPv4AqRVYbAMiojyHqaQjnwHHAD7m/B0LPrT1g3wfYKx9et V7S6wNYBao4hKs132T4TqsIK4H//1Cdm42pgf2bdDiocrlrh2Blw6G7gED2+3LE8BUZU dqGIXksnz5qjUfQ42JaHilDFhG4W3H/QwD92DokDgeMoeNe7AQZEw5sIBRPX8FbHGy8w 1ylg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=XrhTobC3OEejIHUBLuboOaA6T5Q73z3PUoVSkeTI1do=; b=eKSmTM0u8VC5cHAYI5rMsZTjMqoQQOAOQlYzoafp6sT01CekvTPgehg9AlJ0CAncLF gZ15aOx6AjZTlDjJkQtsbaclvvQOPwMxX6dGEyUZnNyvpIziFlPB1OAwXjul9JwvwG/I FsB57f2yXQ8+UfG1ZhlC8U8ma8Rm+9jnT0v2+e+YE/QHGyhxFbkn9+NborwbRc+uLd1H yQLAiKSqK/bd+Zmr2wsmuo/vOAaTwz/cGt+cMs0yhj+Eqgj4hhbVNN/vWIYdleEzC+Xp 62KMHdMvbkVCYSQn/x2usPQthG1/UCVqj+21kTy0eao5gQdsMgCQltsxaBoiHG/My/GX ZIaw== X-Gm-Message-State: AOAM5325Vk71UaTXe5hz9MYaUZHfwG92MCpiU2Mg1mN5AVl7xNHVSJS7 +1jL73qCr10G3syGdktSuLebrmxBydo= X-Google-Smtp-Source: ABdhPJzmHrWxgypzPvNxUR+MzZsthqv2F8rLtCNZ7vvIfW28XqAAHBWf1aaOed/HfQ3WRJnGQ/a0POQ71/s= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a0c:ea87:: with SMTP id d7mr4583465qvp.27.1614286115712; Thu, 25 Feb 2021 12:48:35 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:39 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-15-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 14/24] KVM: x86/mmu: Document dependency bewteen TDP A/D type and saved bits From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Document that SHADOW_ACC_TRACK_SAVED_BITS_SHIFT is directly dependent on bits 53:52 being used to track the A/D type. Remove PT64_SECOND_AVAIL_BITS_SHIFT as it is at best misleading, and at worst wrong. For PAE paging, which arguably is a variant of PT64, the bits are reserved. For MMIO SPTEs the bits are not available as they're used for the MMIO generation. For access tracked SPTEs, they are also not available as bits 56:54 are used to store the original RX bits. No functional change intended. 
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/spte.h | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index bf4f49890606..e918b8f0b21d 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -6,7 +6,6 @@ #include "mmu_internal.h" #define PT_FIRST_AVAIL_BITS_SHIFT 10 -#define PT64_SECOND_AVAIL_BITS_SHIFT 54 /* * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also @@ -134,11 +133,14 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask; * The mask/shift to use for saving the original R/X bits when marking the PTE * as not-present for access tracking purposes. We do not save the W bit as the * PTEs being access tracked also need to be dirty tracked, so the W bit will be - * restored only when a write is attempted to the page. + * restored only when a write is attempted to the page. This mask obviously + * must not overlap the A/D type mask. */ #define SHADOW_ACC_TRACK_SAVED_BITS_MASK (PT64_EPT_READABLE_MASK | \ PT64_EPT_EXECUTABLE_MASK) -#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT PT64_SECOND_AVAIL_BITS_SHIFT +#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 54 +static_assert(!(SPTE_TDP_AD_MASK & (SHADOW_ACC_TRACK_SAVED_BITS_MASK << + SHADOW_ACC_TRACK_SAVED_BITS_SHIFT))); /* * If a thread running without exclusive control of the MMU lock must perform a From patchwork Thu Feb 25 20:47:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104965 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93ACFC433DB for ; Thu, 25 Feb 2021 20:54:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 60C1D64E24 for ; Thu, 25 Feb 2021 20:54:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233127AbhBYUyT (ORCPT ); Thu, 25 Feb 2021 15:54:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50864 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234372AbhBYUuj (ORCPT ); Thu, 25 Feb 2021 15:50:39 -0500 Received: from mail-qv1-xf49.google.com (mail-qv1-xf49.google.com [IPv6:2607:f8b0:4864:20::f49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6BEFEC061574 for ; Thu, 25 Feb 2021 12:48:39 -0800 (PST) Received: by mail-qv1-xf49.google.com with SMTP id k4so5231070qvf.8 for ; Thu, 25 Feb 2021 12:48:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=YR3IY+WMIpueE9LEL4pIDCdHoqUGSZWq7L1hAAqFLng=; b=jBqducYQLQxfzHyUVhCuE4Vp0SKkakWZ+oC++eZkn8uS/jxPlOdEs+vMheTPU2AG6b ZhBKE4f+ZOgab6OIi4GQqzLQhpzVj6xdifdKaJYkPnLiiomQdqdbQtOJyyR3TrMi79if w/bDxpbZ5pW6fcHtwUYslyp0dHrVpTd47U+dCAviH4lk/h35kaheyvbO0F7T1yEQ2bOA kyPMvx0z8psbNcNo9cg6SH8dpjGLWIETgD/G5yStwjsI1G5aQHppW0kB4mZ5JuQaFPMZ 
nZRvLNAbwvuLcRTW8TWLoCeCWjMYDR/NWuzZI2MeDMhg0fgQEuaaNZyWsjVYUPGrSfOz 9LPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=YR3IY+WMIpueE9LEL4pIDCdHoqUGSZWq7L1hAAqFLng=; b=eAs2YP3CvqcrOIomJIoqwaSz2CrXO1Mx7s984tXN/ecBsaJOlJEXJe3WhnZC9ElL61 JicaiPklh7fU1lrDN/Lzu7Ved/PMYLZsBRZlMn/unWVj8Q6c/yfzZf7Avj9oNkQLIDPk syVX0kCD9pXnKCLmt1xL7rDtqx4Sxfu0Mq5S9gD4gzVrdYLsqoUUmJ99o826rqvrxvSt yrz+IK95g1YdSHU8OdZqmOT7iEoordTr0VJnIEnAZc2MuaRjpsdnil7jdoZkqN9eHgXK BpPgrsMx9h7vRpl+Qgwp9oyLKZRbZmpp16KAikRRRX3jNlJ2wuC9fEpSAWe8F+qoxmtY vDVA== X-Gm-Message-State: AOAM530yniTCV9jkpuCFi0MmoNm3kUKIQZFtiUmx/OmPCBD8J+JrKiDT wB/YPgX1+6xELCIktcp3HLI3PWOaZnU= X-Google-Smtp-Source: ABdhPJxIKiq0H74ngKlInfkQTqzfMe0U08Nx04qQaqATl1OtRM8fhF4Hm8/8MLItTALiezdi7txUKtsYkFQ= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a0c:f0d3:: with SMTP id d19mr4680835qvl.15.1614286118575; Thu, 25 Feb 2021 12:48:38 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:40 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-16-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 15/24] KVM: x86/mmu: Move initial kvm_mmu_set_mask_ptes() call into MMU proper From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move kvm_mmu_set_mask_ptes() into mmu.c as prep for future cleanup of the mask initialization code. No functional change intended. 
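
For reference, a self-contained sketch of the defaults the moved call establishes: the PT_* values are the architectural x86 PTE bits, while the local kvm_mmu_set_mask_ptes() below only mirrors the real setter's assignments so the call site can be shown verbatim.

#include <stdint.h>

typedef uint64_t u64;

/* Architectural x86 PTE bits. */
#define PT_PRESENT_MASK   (1ULL << 0)
#define PT_USER_MASK      (1ULL << 2)
#define PT_ACCESSED_MASK  (1ULL << 5)
#define PT_DIRTY_MASK     (1ULL << 6)
#define PT64_NX_MASK      (1ULL << 63)

/* Stand-ins for the shadow_*_mask globals the real call writes. */
static u64 shadow_user_mask, shadow_accessed_mask, shadow_dirty_mask;
static u64 shadow_nx_mask, shadow_x_mask, shadow_present_mask;
static u64 shadow_acc_track_mask, shadow_me_mask;

static void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
                                  u64 dirty_mask, u64 nx_mask, u64 x_mask,
                                  u64 p_mask, u64 acc_track_mask, u64 me_mask)
{
        shadow_user_mask = user_mask;
        shadow_accessed_mask = accessed_mask;
        shadow_dirty_mask = dirty_mask;
        shadow_nx_mask = nx_mask;
        shadow_x_mask = x_mask;
        shadow_present_mask = p_mask;
        shadow_acc_track_mask = acc_track_mask;
        shadow_me_mask = me_mask;
}

int main(void)
{
        u64 sme_me_mask = 0;    /* no SME C-bit in this sketch */

        /* The call as it now appears in kvm_mmu_module_init(). */
        kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
                              PT_DIRTY_MASK, PT64_NX_MASK, 0,
                              PT_PRESENT_MASK, 0, sme_me_mask);
        return 0;
}
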
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 4 ++++ arch/x86/kvm/x86.c | 3 --- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f92571b786a2..99d9c85a1820 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5796,6 +5796,10 @@ int kvm_mmu_module_init(void) kvm_set_mmio_spte_mask(); + kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK, + PT_DIRTY_MASK, PT64_NX_MASK, 0, + PT_PRESENT_MASK, 0, sme_me_mask); + pte_list_desc_cache = kmem_cache_create("pte_list_desc", sizeof(struct pte_list_desc), 0, SLAB_ACCOUNT, NULL); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index c1b7bdf47e7e..5a27468c6afa 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8024,9 +8024,6 @@ int kvm_arch_init(void *opaque) if (r) goto out_free_percpu; - kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK, - PT_DIRTY_MASK, PT64_NX_MASK, 0, - PT_PRESENT_MASK, 0, sme_me_mask); kvm_timer_init(); perf_register_guest_info_callbacks(&kvm_guest_cbs); From patchwork Thu Feb 25 20:47:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104973 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E053BC433E0 for ; Thu, 25 Feb 2021 20:55:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A595964E28 for ; Thu, 25 Feb 2021 20:55:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234781AbhBYUzQ (ORCPT ); Thu, 25 Feb 2021 15:55:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234220AbhBYUwY (ORCPT ); Thu, 25 Feb 2021 15:52:24 -0500 Received: from mail-qv1-xf4a.google.com (mail-qv1-xf4a.google.com [IPv6:2607:f8b0:4864:20::f4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2B774C0611BD for ; Thu, 25 Feb 2021 12:48:42 -0800 (PST) Received: by mail-qv1-xf4a.google.com with SMTP id u15so4274764qvo.13 for ; Thu, 25 Feb 2021 12:48:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=/7xTLuHtPMz8ccvejSO8UR7nmAyPLxW40HeF3dwYprM=; b=PvLXG6psi++BpRCK3IkTPXetFp0jysKlfLyaWeoTic9vanP3TXYHw6fieclyhZKyDX 7VpymOPuRluvJA9Gi/U9p2hjyJsgE6eK9H1mF2gZ/pGLidE1yjCTSNfjBWZwFp1ZTsnt 9XRTnF/rUlE2Ug4Y+oGAJsDIINuafW+YUIzppzjB3rJ8xhDqd1Cy3+dUQU9ysgOCA1zh 5RY5jliebVJ8nfqkKc1/WzsmqCydgoTmpbdLYeuy8Wnqp3MvzgytQbZ1v9g4dU0Psgtp wxYrsT0KkH7z08F2C2tswBvyxY7G72iUkJraqpIeqolpMKIivCGRyqSuktQm27iAAePO uJqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=/7xTLuHtPMz8ccvejSO8UR7nmAyPLxW40HeF3dwYprM=; 
b=VcY6QODgymiMA/VAeSjT6XNJJgMoplWx2hM0RthD3m7QcywxOBxbXCJQM5mLY8ahu4 i/92xds+2xVefmb2SuG7wex5F/hscAnPw1v4ymsj/5NRTHcm56feVxxUWqAopvnfFdKP CrbPMLe1/BzHb4g5yLbXvGNUUBFU4hDFZWkk7kua1Ooj2hOtYrXzxcC8PJvdbAXbaLIh IimIXZh5HnkURxtDuLMQfjMriNn3vJqvw3xkshBesC/rpIod3Am5zJyCEOCkceIS2zpg If+76FOe4os0U34O55qsp0J7PBaWrNzK2YNaPy71yi4QlwJuvRZkKu/syUJzvJnvEE00 Ujtg== X-Gm-Message-State: AOAM531iPY7R2lT/mLzAkUvud+zNNVzLMR5DW5QCwVwB8bFImQrywuK8 Z/Jc6KgbmxdixRVjgLIqcti7cFoO3go= X-Google-Smtp-Source: ABdhPJyXy6fkpXq9oB+zOCMM1oxTbSWmXHUo16dC71mvM//Fc0kPmwc4KpG+HwQTn14mFqBVaYqrt3oq7Zw= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:ad4:53ac:: with SMTP id j12mr4752602qvv.3.1614286121398; Thu, 25 Feb 2021 12:48:41 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:41 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-17-seanjc@google.com> Mime-Version: 1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 16/24] KVM: x86/mmu: Co-locate code for setting various SPTE masks From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Squish all the code for (re)setting the various SPTE masks into one location. With the split code, it's not at all clear that the masks are set once during module initialization. This will allow a future patch to clean up initialization of the masks without shuffling code all over tarnation. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 25 ------------------------- arch/x86/kvm/mmu/spte.c | 19 +++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 17 ++++++----------- 3 files changed, 25 insertions(+), 36 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 99d9c85a1820..1fb500db46e0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5710,25 +5710,6 @@ static void mmu_destroy_caches(void) kmem_cache_destroy(mmu_page_header_cache); } -static void kvm_set_mmio_spte_mask(void) -{ - u64 mask; - - /* - * Set a reserved PA bit in MMIO SPTEs to generate page faults with - * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT - * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports - * 52-bit physical addresses then there are no reserved PA bits in the - * PTEs and so the reserved PA approach must be disabled. 
- */ - if (shadow_phys_bits < 52) - mask = BIT_ULL(51) | PT_PRESENT_MASK; - else - mask = 0; - - kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK); -} - static bool get_nx_auto_mode(void) { /* Return true when CPU has the bug, and mitigations are ON */ @@ -5794,12 +5775,6 @@ int kvm_mmu_module_init(void) kvm_mmu_reset_all_pte_masks(); - kvm_set_mmio_spte_mask(); - - kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK, - PT_DIRTY_MASK, PT64_NX_MASK, 0, - PT_PRESENT_MASK, 0, sme_me_mask); - pte_list_desc_cache = kmem_cache_create("pte_list_desc", sizeof(struct pte_list_desc), 0, SLAB_ACCOUNT, NULL); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index cf0e20b34cd3..b15d6006dbee 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -310,6 +310,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes); void kvm_mmu_reset_all_pte_masks(void) { u8 low_phys_bits; + u64 mask; shadow_user_mask = 0; shadow_accessed_mask = 0; @@ -344,4 +345,22 @@ void kvm_mmu_reset_all_pte_masks(void) shadow_nonpresent_or_rsvd_lower_gfn_mask = GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT); + + /* + * Set a reserved PA bit in MMIO SPTEs to generate page faults with + * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT + * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports + * 52-bit physical addresses then there are no reserved PA bits in the + * PTEs and so the reserved PA approach must be disabled. + */ + if (shadow_phys_bits < 52) + mask = BIT_ULL(51) | PT_PRESENT_MASK; + else + mask = 0; + + kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK); + + kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK, + PT_DIRTY_MASK, PT64_NX_MASK, 0, + PT_PRESENT_MASK, 0, sme_me_mask); } diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 8a8423a97f13..730076b3832f 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4314,16 +4314,6 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx) vmx->secondary_exec_control = exec_control; } -static void ept_set_mmio_spte_mask(void) -{ - /* - * EPT Misconfigurations can be generated if the value of bits 2:0 - * of an EPT paging-structure entry is 110b (write/execute). - */ - kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, - VMX_EPT_RWX_MASK, 0); -} - #define VMX_XSS_EXIT_BITMAP 0 /* @@ -5462,7 +5452,12 @@ static void vmx_enable_tdp(void) cpu_has_vmx_ept_execute_only() ? 0ull : VMX_EPT_READABLE_MASK, VMX_EPT_RWX_MASK, 0ull); - ept_set_mmio_spte_mask(); + /* + * EPT Misconfigurations can be generated if the value of bits 2:0 + * of an EPT paging-structure entry is 110b (write/execute). 
+ */ + kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, + VMX_EPT_RWX_MASK, 0); } /* From patchwork Thu Feb 25 20:47:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104969 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE06CC433DB for ; Thu, 25 Feb 2021 20:54:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 915C764DA3 for ; Thu, 25 Feb 2021 20:54:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232290AbhBYUyu (ORCPT ); Thu, 25 Feb 2021 15:54:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234112AbhBYUwM (ORCPT ); Thu, 25 Feb 2021 15:52:12 -0500 Received: from mail-qk1-x74a.google.com (mail-qk1-x74a.google.com [IPv6:2607:f8b0:4864:20::74a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DAA55C061786 for ; Thu, 25 Feb 2021 12:48:44 -0800 (PST) Received: by mail-qk1-x74a.google.com with SMTP id y64so4130755qkc.7 for ; Thu, 25 Feb 2021 12:48:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=OnI6w6icsmTuNVXFI6cQ8RuOMnt7x3dOFjkRcgEIyN4=; b=aeDxs/m+PaufMKC82eLzi6Rr37DzXsrwS8A86hV3eKFh/ITch19q0xJHEfXhdwtWI+ zUeuVjrBJ2ufZYUHLmp9xxwpm8RRTLxxBZ8SMyGT9jhrYxWE1Ezx+6k+YNs+xoNflAeM 6IL+mbsLGwO0IUN8N2sGXmD4cONrVWsYnJc8ItRus2EejcB+RFeceZ7PUu0aqKmcNK4y hAFgkRowoUKN40qboF5pe0X2PZ2Zs+e9CD7NjKPrVfAwDdLBhyWhKAhMY1WqbMpA6qWJ Qsuoo3tpzjPu/MT30LNVSzHc/jgPJk+cw47Q3vjh2aCaU52A/z1Yn30R0nGGjvxl1O0x AuzA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=OnI6w6icsmTuNVXFI6cQ8RuOMnt7x3dOFjkRcgEIyN4=; b=j2Nc6h6KsF6C7PAI8DOjTv23Fj+5atlN1ZCsizhfT3YUxzVWVh2z5+ve3/64Hsl8qB v3H7I0NtPO+JhbS0Uc73IJEkHZZkBreGavbvMUPdIWSTToZoMKtGNb2Nz/r+oWto4bTg mKlGLzLplSu2g+e5mGXhM+Hdx9IFWXx6Si2B+8ciH/QChWTTb83LnLB+sK/5aLEiv/5n cZ842E7cUekKCXNHOxyXKJV828kUF2ge8glRx2sYbNuYquGEzixnAxfov3gSsh2H/o2f hjoqgUEM7i+BKqV0RFtv6rPtf4DWulswg/pXCT2CqfZ5QFhyzPlJ7wQhGpDdH/DU2StN wo7g== X-Gm-Message-State: AOAM532hbMFINY6HHvY2Jiw3ftxVXGaOaVNg22wXqRlY1evAzyFoBTxq XuRH2bBbZk78Ehjla84fsljQBLlwdio= X-Google-Smtp-Source: ABdhPJwrjLlGlIPjHktPX5MROQiVD8AZ5jez8jaHsq3LbJkwdzRz9Q2ba+0ECJFRupG6TM+PN0drsy1rOeg= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:ad4:5ec9:: with SMTP id jm9mr4681101qvb.56.1614286123922; Thu, 25 Feb 2021 12:48:43 -0800 (PST) Reply-To: Sean Christopherson Date: Thu, 25 Feb 2021 12:47:42 -0800 In-Reply-To: <20210225204749.1512652-1-seanjc@google.com> Message-Id: <20210225204749.1512652-18-seanjc@google.com> Mime-Version: 
1.0 References: <20210225204749.1512652-1-seanjc@google.com> X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog Subject: [PATCH 17/24] KVM: x86/mmu: Move logic for setting SPTE masks for EPT into the MMU proper From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Let the MMU deal with the SPTE masks to avoid splitting the logic and knowledge across the MMU and VMX. The SPTE masks that are used for EPT are very, very tightly coupled to the MMU implementation. The use of available bits, the existence of A/D types, the fact that shadow_x_mask even exists, and so on and so forth are all baked into the MMU implementation. Cross referencing the params to the masks is also a nightmare, as pretty much every param is a u64. A future patch will make the location of the MMU_WRITABLE and HOST_WRITABLE bits MMU specific, to free up bit 11 for a MMU_PRESENT bit. Doing that change with the current kvm_mmu_set_mask_ptes() would be an absolute mess. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 3 -- arch/x86/kvm/mmu.h | 1 + arch/x86/kvm/mmu/spte.c | 60 ++++++++++++++------------------- arch/x86/kvm/vmx/vmx.c | 20 ++--------- 4 files changed, 29 insertions(+), 55 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index cc376327a168..629f74f2a00a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1407,9 +1407,6 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu); int kvm_mmu_create(struct kvm_vcpu *vcpu); void kvm_mmu_init_vm(struct kvm *kvm); void kvm_mmu_uninit_vm(struct kvm *kvm); -void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask, - u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask, - u64 acc_track_mask, u64 me_mask); void kvm_mmu_reset_context(struct kvm_vcpu *vcpu); void kvm_mmu_slot_remove_write_access(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 00f4a541e04d..11cf7793cfee 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -60,6 +60,7 @@ static __always_inline u64 rsvd_bits(int s, int e) } void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask); +void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only); void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index b15d6006dbee..ac5ea6fda969 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -16,6 +16,7 @@ #include "spte.h" #include +#include static bool __read_mostly enable_mmio_caching = true; module_param_named(mmio_caching, enable_mmio_caching, bool, 0444); @@ -281,45 +282,31 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask) } EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask); -/* - * Sets the shadow PTE masks used by the MMU. 
- * - * Assumptions: - * - Setting either @accessed_mask or @dirty_mask requires setting both - * - At least one of @accessed_mask or @acc_track_mask must be set - */ -void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask, - u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask, - u64 acc_track_mask, u64 me_mask) +void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only) { - BUG_ON(!dirty_mask != !accessed_mask); - BUG_ON(!accessed_mask && !acc_track_mask); - BUG_ON(acc_track_mask & SPTE_TDP_AD_MASK); + shadow_user_mask = VMX_EPT_READABLE_MASK; + shadow_accessed_mask = has_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull; + shadow_dirty_mask = has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull; + shadow_nx_mask = 0ull; + shadow_x_mask = VMX_EPT_EXECUTABLE_MASK; + shadow_present_mask = has_exec_only ? 0ull : VMX_EPT_READABLE_MASK; + shadow_acc_track_mask = VMX_EPT_RWX_MASK; + shadow_me_mask = 0ull; - shadow_user_mask = user_mask; - shadow_accessed_mask = accessed_mask; - shadow_dirty_mask = dirty_mask; - shadow_nx_mask = nx_mask; - shadow_x_mask = x_mask; - shadow_present_mask = p_mask; - shadow_acc_track_mask = acc_track_mask; - shadow_me_mask = me_mask; + /* + * EPT Misconfigurations are generated if the value of bits 2:0 + * of an EPT paging-structure entry is 110b (write/execute). + */ + kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, + VMX_EPT_RWX_MASK, 0); } -EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes); +EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks); void kvm_mmu_reset_all_pte_masks(void) { u8 low_phys_bits; u64 mask; - shadow_user_mask = 0; - shadow_accessed_mask = 0; - shadow_dirty_mask = 0; - shadow_nx_mask = 0; - shadow_x_mask = 0; - shadow_present_mask = 0; - shadow_acc_track_mask = 0; - shadow_phys_bits = kvm_get_shadow_phys_bits(); /* @@ -346,6 +333,15 @@ void kvm_mmu_reset_all_pte_masks(void) shadow_nonpresent_or_rsvd_lower_gfn_mask = GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT); + shadow_user_mask = PT_USER_MASK; + shadow_accessed_mask = PT_ACCESSED_MASK; + shadow_dirty_mask = PT_DIRTY_MASK; + shadow_nx_mask = PT64_NX_MASK; + shadow_x_mask = 0; + shadow_present_mask = PT_PRESENT_MASK; + shadow_acc_track_mask = 0; + shadow_me_mask = sme_me_mask; + /* * Set a reserved PA bit in MMIO SPTEs to generate page faults with * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT @@ -359,8 +355,4 @@ void kvm_mmu_reset_all_pte_masks(void) mask = 0; kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK); - - kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK, - PT_DIRTY_MASK, PT64_NX_MASK, 0, - PT_PRESENT_MASK, 0, sme_me_mask); } diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 730076b3832f..6d7e760fdfa0 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5443,23 +5443,6 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu) } } -static void vmx_enable_tdp(void) -{ - kvm_mmu_set_mask_ptes(VMX_EPT_READABLE_MASK, - enable_ept_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull, - enable_ept_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull, - 0ull, VMX_EPT_EXECUTABLE_MASK, - cpu_has_vmx_ept_execute_only() ? 0ull : VMX_EPT_READABLE_MASK, - VMX_EPT_RWX_MASK, 0ull); - - /* - * EPT Misconfigurations can be generated if the value of bits 2:0 - * of an EPT paging-structure entry is 110b (write/execute). - */ - kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE, - VMX_EPT_RWX_MASK, 0); -} - /* * Indicate a busy-waiting vcpu in spinlock. We do not enable the PAUSE * exiting, so only get here on cpu with PAUSE-Loop-Exiting. 
@@ -7788,7 +7771,8 @@ static __init int hardware_setup(void) set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */ if (enable_ept) - vmx_enable_tdp(); + kvm_mmu_set_ept_masks(enable_ept_ad_bits, + cpu_has_vmx_ept_execute_only()); if (!enable_ept) ept_lpage_level = 0; From patchwork Thu Feb 25 20:47:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 12104971 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9DEDC433DB for ; Thu, 25 Feb 2021 20:55:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7E3D164E7A for ; Thu, 25 Feb 2021 20:55:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234711AbhBYUzB (ORCPT ); Thu, 25 Feb 2021 15:55:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234583AbhBYUwM (ORCPT ); Thu, 25 Feb 2021 15:52:12 -0500 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8E5ABC061788 for ; Thu, 25 Feb 2021 12:48:47 -0800 (PST) Received: by mail-yb1-xb49.google.com with SMTP id p136so7624510ybc.21 for ; Thu, 25 Feb 2021 12:48:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:reply-to:date:in-reply-to:message-id:mime-version:references :subject:from:to:cc; bh=KPwBN3kxwz361LXpz9SZI0brfn4PUGIyM8jlrlDx8Uc=; b=diozhjASaLDUbb2ByAoCABeedgKcXRJj/yekQAvg1TQktIpMPKFRiLsT3QAhMgtFYE KwbazfUAOsntv7/WDECQvoPJ27oX+T57ZuVkFyRBa4ikBHyg1T+WTBZQ94HSgQOpyNmx Qx8R9O+H68laTaQEuj+2h2VgqoNVYDde+IGYdVt4H/8dRgRpVsFB+3gyAvvlxHCOxHfM r/4TW30NzMUiaxCeMlTsTn0X8Fk6SJsHTsBpAISNbaFbLUcpio6x5MfnBu51qjzfXIQX buz7xyHby6bJUtR+taDfsTrW8v9zmMSqCdPHiUJAYoeOXIq/hT7pr7G3PvV6W1vmiBr4 8sSw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:reply-to:date:in-reply-to:message-id :mime-version:references:subject:from:to:cc; bh=KPwBN3kxwz361LXpz9SZI0brfn4PUGIyM8jlrlDx8Uc=; b=iQWmwQM1YtgrX0EyEOnd+DJeZEsEZXicSw8vt+XrfeaAriDRGRlEneDuBBRJ1IdFB4 1seAqsOAVyBBDN743GvFIlCcUwRCHv88rxfrLMop7bL/+LghcFyQyEDr31f367R29ZAH EOXnPPyGlRR46oDNB4tOCm3ZFdXy2BnX4PeNaPx9sgz4hTRu2tBIeGR5VYzmpoiH85Zm graI0YKkBN5CjheDnc1nZj4+/4bBpEyIi2uDQ+DUMlf4XEy4tytal/GbzK+h2HMDSEaK mUF8SSdPk2c9yN8WVCvltoZbxxNyySKk4mcfwWkhbYSN+Bu0qcpFJ2XYJrRdNsET5iXu ArXQ== X-Gm-Message-State: AOAM531wmniIMjigWr3CBDibmiu/Qp2S/bzcQvvaDb0rgcVg0QTCsiSn N0BxNVI5tsuwO3Y66Xa4JxeIX/G5fZE= X-Google-Smtp-Source: ABdhPJxKUYSdGYv5Qx/boJeF4MiMwpQyNbZptET5Hx1lGbYcIdQLitjVyh8rklMQdT7MXRoKh1zo7VJALP0= Sender: "seanjc via sendgmr" X-Received: from seanjc798194.pdx.corp.google.com ([2620:15c:f:10:34c4:7c1d:f9ba:4576]) (user=seanjc job=sendgmr) by 2002:a25:25d8:: with SMTP id l207mr7261484ybl.68.1614286126849; Thu, 25 Feb 2021 12:48:46 -0800 (PST) Reply-To: Sean 
Make the location of the HOST_WRITABLE and MMU_WRITABLE bits configurable
for a given KVM instance. This will allow EPT to use high available bits,
which in turn will free up bit 11 for a constant MMU_PRESENT bit.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/locking.rst | 18 +++++++++---------
 arch/x86/kvm/mmu.h                 | 12 ++++++------
 arch/x86/kvm/mmu/mmu.c             |  8 ++++----
 arch/x86/kvm/mmu/paging_tmpl.h     |  2 +-
 arch/x86/kvm/mmu/spte.c            | 13 +++++++++----
 arch/x86/kvm/mmu/spte.h            | 13 ++++++-------
 arch/x86/kvm/mmu/tdp_mmu.c         |  6 +++---
 7 files changed, 38 insertions(+), 34 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 85876afe0441..1fc860c007a3 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -44,18 +44,18 @@ following two cases:
 2. Write-Protection: The SPTE is present and the fault is caused by
    write-protect. That means we just need to change the W bit of the spte.
 
-What we use to avoid all the race is the SPTE_HOST_WRITEABLE bit and
-SPTE_MMU_WRITEABLE bit on the spte:
+What we use to avoid all the race is the Host-writable bit and MMU-writable bit
+on the spte:
 
-- SPTE_HOST_WRITEABLE means the gfn is writable on host.
-- SPTE_MMU_WRITEABLE means the gfn is writable on mmu. The bit is set when
-  the gfn is writable on guest mmu and it is not write-protected by shadow
-  page write-protection.
+- Host-writable means the gfn is writable in the host kernel page tables and in
+  its KVM memslot.
+- MMU-writable means the gfn is writable in the guest's mmu and it is not
+  write-protected by shadow page write-protection.
 
 On fast page fault path, we will use cmpxchg to atomically set the spte W
-bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_WRITE_PROTECT = 1, to
-restore the saved R/X bits if for an access-traced spte, or both. This is
-safe because whenever changing these bits can be detected by cmpxchg.
+bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the saved
+R/X bits for an access-tracked spte, or both. This is safe because any change
+to these bits can be detected by cmpxchg.
 
 But we need carefully check these cases:

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 11cf7793cfee..72b0f66073dc 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -125,7 +125,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
  * write-protects guest page to sync the guest modification, b) another one is
  * used to sync dirty bitmap when we do KVM_GET_DIRTY_LOG. The differences
  * between these two sorts are:
- * 1) the first case clears SPTE_MMU_WRITEABLE bit.
+ * 1) the first case clears MMU-writable bit.
  * 2) the first case requires flushing tlb immediately avoiding corrupting
  *    shadow page table between all vcpus so it should be in the protection of
  *    mmu-lock.
 *    And the another case does not need to flush tlb until returning
@@ -136,17 +136,17 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
  * So, there is the problem: the first case can meet the corrupted tlb caused
  * by another case which write-protects pages but without flush tlb
  * immediately. In order to making the first case be aware this problem we let
- * it flush tlb if we try to write-protect a spte whose SPTE_MMU_WRITEABLE bit
- * is set, it works since another case never touches SPTE_MMU_WRITEABLE bit.
+ * it flush tlb if we try to write-protect a spte whose MMU-writable bit
+ * is set, it works since another case never touches MMU-writable bit.
  *
  * Anyway, whenever a spte is updated (only permission and status bits are
- * changed) we need to check whether the spte with SPTE_MMU_WRITEABLE becomes
+ * changed) we need to check whether the spte with MMU-writable becomes
  * readonly, if that happens, we need to flush tlb. Fortunately,
  * mmu_spte_update() has already handled it perfectly.
  *
- * The rules to use SPTE_MMU_WRITEABLE and PT_WRITABLE_MASK:
+ * The rules to use MMU-writable and PT_WRITABLE_MASK:
  * - if we want to see if it has writable tlb entry or if the spte can be
- *   writable on the mmu mapping, check SPTE_MMU_WRITEABLE, this is the most
+ *   writable on the mmu mapping, check MMU-writable, this is the most
  *   case, otherwise
  * - if we fix page fault on the spte or do write-protection by dirty logging,
  *   check PT_WRITABLE_MASK.

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1fb500db46e0..e636fcd529d2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1107,7 +1107,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 	rmap_printk("spte %p %llx\n", sptep, *sptep);
 
 	if (pt_protect)
-		spte &= ~SPTE_MMU_WRITEABLE;
+		spte &= ~shadow_mmu_writable_mask;
 	spte = spte & ~PT_WRITABLE_MASK;
 
 	return mmu_spte_update(sptep, spte);
@@ -5485,9 +5485,9 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	 * spte from present to present (changing the spte from present
 	 * to nonpresent will flush all the TLBs immediately), in other
 	 * words, the only case we care is mmu_spte_update() where we
-	 * have checked SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE
-	 * instead of PT_WRITABLE_MASK, that means it does not depend
-	 * on PT_WRITABLE_MASK anymore.
+	 * have checked Host-writable | MMU-writable instead of
+	 * PT_WRITABLE_MASK, that means it does not depend on PT_WRITABLE_MASK
+	 * anymore.
 	 */
 	if (flush)
 		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 55d7b473ac44..8b9987d5fe02 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1084,7 +1084,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		nr_present++;
 
-		host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
+		host_writable = sp->spt[i] & shadow_host_writable_mask;
 
 		set_spte_ret |= set_spte(vcpu, &sp->spt[i],
 					 pte_access, PG_LEVEL_4K,

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index ac5ea6fda969..2329ba60c67a 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -21,6 +21,8 @@ static bool __read_mostly enable_mmio_caching = true;
 module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
 
+u64 __read_mostly shadow_host_writable_mask;
+u64 __read_mostly shadow_mmu_writable_mask;
 u64 __read_mostly shadow_nx_mask;
 u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 u64 __read_mostly shadow_user_mask;
@@ -137,7 +139,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		     kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
-		spte |= SPTE_HOST_WRITEABLE;
+		spte |= shadow_host_writable_mask;
 	else
 		pte_access &= ~ACC_WRITE_MASK;
 
@@ -147,7 +149,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 	spte |= (u64)pfn << PAGE_SHIFT;
 
 	if (pte_access & ACC_WRITE_MASK) {
-		spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
+		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
@@ -163,7 +165,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 				 __func__, gfn);
 			ret |= SET_SPTE_WRITE_PROTECTED_PT;
 			pte_access &= ~ACC_WRITE_MASK;
-			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+			spte &= ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
 		}
 	}
 
@@ -202,7 +204,7 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn)
 	new_spte |= (u64)new_pfn << PAGE_SHIFT;
 
 	new_spte &= ~PT_WRITABLE_MASK;
-	new_spte &= ~SPTE_HOST_WRITEABLE;
+	new_spte &= ~shadow_host_writable_mask;
 
 	new_spte = mark_spte_for_access_track(new_spte);
 
@@ -342,6 +344,9 @@ void kvm_mmu_reset_all_pte_masks(void)
 	shadow_acc_track_mask	= 0;
 	shadow_me_mask		= sme_me_mask;
 
+	shadow_host_writable_mask = DEFAULT_SPTE_HOST_WRITEABLE;
+	shadow_mmu_writable_mask  = DEFAULT_SPTE_MMU_WRITEABLE;
+
 	/*
 	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
 	 * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index e918b8f0b21d..287540d211a9 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -5,8 +5,6 @@
 
 #include "mmu_internal.h"
 
-#define PT_FIRST_AVAIL_BITS_SHIFT 10
-
 /*
  * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
  * be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
@@ -59,9 +57,8 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
 #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
 
-
-#define SPTE_HOST_WRITEABLE	(1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
-#define SPTE_MMU_WRITEABLE	(1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1))
+#define DEFAULT_SPTE_HOST_WRITEABLE	BIT_ULL(10)
+#define DEFAULT_SPTE_MMU_WRITEABLE	BIT_ULL(11)
 
 /*
  * Due to limited space in PTEs, the MMIO generation is a 20 bit subset of
@@ -100,6 +97,8 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 9 && MMIO_SPTE_GEN_HIGH_BITS == 11);
 #define MMIO_SPTE_GEN_MASK	GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
 
+extern u64 __read_mostly shadow_host_writable_mask;
+extern u64 __read_mostly shadow_mmu_writable_mask;
 extern u64 __read_mostly shadow_nx_mask;
 extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 extern u64 __read_mostly shadow_user_mask;
@@ -264,8 +263,8 @@ static inline bool is_dirty_spte(u64 spte)
 
 static inline bool spte_can_locklessly_be_made_writable(u64 spte)
 {
-	return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
-		(SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
+	return (spte & shadow_host_writable_mask) &&
+	       (spte & shadow_mmu_writable_mask);
 }
 
 static inline u64 get_mmio_spte_generation(u64 spte)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 782cae1eb5e1..bef0e1908e82 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1329,7 +1329,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 
 /*
  * Removes write access on the last level SPTE mapping this GFN and unsets the
- * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * MMU-writable bit to ensure future writes continue to be intercepted.
  * Returns true if an SPTE was set and a TLB flush is needed.
  */
 static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
@@ -1346,7 +1346,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 			break;
 
 		new_spte = iter.old_spte &
-			~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
 
 		tdp_mmu_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
@@ -1359,7 +1359,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 
 /*
  * Removes write access on the last level SPTE mapping this GFN and unsets the
- * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * MMU-writable bit to ensure future writes continue to be intercepted.
  * Returns true if an SPTE was set and a TLB flush is needed.
  */
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
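A minimal user-space sketch of the pattern this patch introduces, for readers
following along: the writable-bit positions become runtime-chosen globals
instead of compile-time constants, and the lockless-writable test simply
consults them. The mask values and helper shapes mirror the patch; the
harness around them is illustrative only, not KVM code.

    #include <stdint.h>
    #include <stdio.h>

    /* Chosen once at "hardware setup" time, as in kvm_mmu_reset_all_pte_masks(). */
    static uint64_t shadow_host_writable_mask;
    static uint64_t shadow_mmu_writable_mask;

    static void set_default_masks(void)
    {
        shadow_host_writable_mask = 1ULL << 10; /* DEFAULT_SPTE_HOST_WRITEABLE */
        shadow_mmu_writable_mask  = 1ULL << 11; /* DEFAULT_SPTE_MMU_WRITEABLE */
    }

    /* Same shape as spte_can_locklessly_be_made_writable() after the patch. */
    static int can_locklessly_be_made_writable(uint64_t spte)
    {
        return (spte & shadow_host_writable_mask) &&
               (spte & shadow_mmu_writable_mask);
    }

    int main(void)
    {
        set_default_masks();
        /* Both bits set: prints 1. */
        printf("%d\n", can_locklessly_be_made_writable((1ULL << 10) | (1ULL << 11)));
        return 0;
    }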

From patchwork Thu Feb 25 20:47:44 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104997
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:44 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-20-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 19/24] KVM: x86/mmu: Use high bits for host/mmu writable
 masks for EPT SPTEs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Use bits 57 and 58 for HOST_WRITABLE and MMU_WRITABLE when using EPT. This
will allow using bit 11 as a constant MMU_PRESENT, which is desirable as
checking for a shadow-present SPTE is one of the most common SPTE operations
in KVM, particularly in hot paths such as page faults.

EPT is short on low available bits: bit 11 is currently the only
always-available low bit, and bit 10 is available only as long as KVM
doesn't support mode-based execution. PAE paging, on the other hand, doesn't
have _any_ high available bits. Thus, using bit 11 for MMU_PRESENT is the
only feasible option.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c |  3 +++
 arch/x86/kvm/mmu/spte.h | 48 ++++++++++++++++++++++++++++-------------
 2 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 2329ba60c67a..d12acf5eb871 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -295,6 +295,9 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
 	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
 	shadow_me_mask		= 0ull;
 
+	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
+	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
+
 	/*
 	 * EPT Misconfigurations are generated if the value of bits 2:0
 	 * of an EPT paging-structure entry is 110b (write/execute).

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 287540d211a9..8996baa8da15 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -57,8 +57,39 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
 #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
 
-#define DEFAULT_SPTE_HOST_WRITEABLE	BIT_ULL(10)
-#define DEFAULT_SPTE_MMU_WRITEABLE	BIT_ULL(11)
+/* Bits 9 and 10 are ignored by all non-EPT PTEs. */
+#define DEFAULT_SPTE_HOST_WRITEABLE	BIT_ULL(9)
+#define DEFAULT_SPTE_MMU_WRITEABLE	BIT_ULL(10)
+
+/*
+ * The mask/shift to use for saving the original R/X bits when marking the PTE
+ * as not-present for access tracking purposes. We do not save the W bit as the
+ * PTEs being access tracked also need to be dirty tracked, so the W bit will be
+ * restored only when a write is attempted to the page. This mask obviously
+ * must not overlap the A/D type mask.
+ */
+#define SHADOW_ACC_TRACK_SAVED_BITS_MASK	(PT64_EPT_READABLE_MASK | \
+						 PT64_EPT_EXECUTABLE_MASK)
+#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT	54
+#define SHADOW_ACC_TRACK_SAVED_MASK	(SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
+					 SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
+static_assert(!(SPTE_TDP_AD_MASK & SHADOW_ACC_TRACK_SAVED_MASK));
+
+/*
+ * Low ignored bits are at a premium for EPT, use high ignored bits, taking care
+ * to not overlap the A/D type mask or the saved access bits of access-tracked
+ * SPTEs when A/D bits are disabled.
+ */
+#define EPT_SPTE_HOST_WRITABLE		BIT_ULL(57)
+#define EPT_SPTE_MMU_WRITABLE		BIT_ULL(58)
+
+static_assert(!(EPT_SPTE_HOST_WRITABLE & SPTE_TDP_AD_MASK));
+static_assert(!(EPT_SPTE_MMU_WRITABLE & SPTE_TDP_AD_MASK));
+static_assert(!(EPT_SPTE_HOST_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
+static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
+
+/* Defined only to keep the above static asserts readable. */
+#undef SHADOW_ACC_TRACK_SAVED_MASK
 
 /*
  * Due to limited space in PTEs, the MMIO generation is a 20 bit subset of
@@ -128,19 +159,6 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  */
 #define SHADOW_NONPRESENT_OR_RSVD_MASK_LEN 5
 
-/*
- * The mask/shift to use for saving the original R/X bits when marking the PTE
- * as not-present for access tracking purposes. We do not save the W bit as the
- * PTEs being access tracked also need to be dirty tracked, so the W bit will be
- * restored only when a write is attempted to the page. This mask obviously
- * must not overlap the A/D type mask.
- */
-#define SHADOW_ACC_TRACK_SAVED_BITS_MASK	(PT64_EPT_READABLE_MASK | \
-						 PT64_EPT_EXECUTABLE_MASK)
-#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT	54
-static_assert(!(SPTE_TDP_AD_MASK & (SHADOW_ACC_TRACK_SAVED_BITS_MASK <<
-				    SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)));
-
 /*
  * If a thread running without exclusive control of the MMU lock must perform a
  * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
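To see why bits 57 and 58 are safe picks: access tracking saves the EPT R and
X bits (bits 0 and 2) at a shift of 54, i.e. occupies bits 54 and 56. A
compile-time check in the spirit of the patch's static_asserts, written as a
stand-alone C11 sketch with the constants restated for illustration:

    #include <assert.h>

    #define EPT_READABLE_BIT    (1ULL << 0)
    #define EPT_EXECUTABLE_BIT  (1ULL << 2)
    #define SAVED_BITS_SHIFT    54
    #define SAVED_MASK          ((EPT_READABLE_BIT | EPT_EXECUTABLE_BIT) << SAVED_BITS_SHIFT)
    #define EPT_HOST_WRITABLE   (1ULL << 57)
    #define EPT_MMU_WRITABLE    (1ULL << 58)

    /* Bits 57/58 must not collide with the saved R/X bits (54 and 56). */
    static_assert(!(EPT_HOST_WRITABLE & SAVED_MASK), "host-writable overlaps saved bits");
    static_assert(!(EPT_MMU_WRITABLE & SAVED_MASK), "mmu-writable overlaps saved bits");

    int main(void) { return 0; }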

From patchwork Thu Feb 25 20:47:45 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104975
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:45 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-21-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 20/24] KVM: x86/mmu: Use a dedicated bit to track
 shadow/MMU-present SPTEs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Introduce MMU_PRESENT to explicitly track which SPTEs are "present" from
the MMU's perspective. Checking for shadow-present SPTEs is a very common
operation for the MMU, particularly in hot paths such as page faults. With
the addition of "removed" SPTEs for the TDP MMU, identifying shadow-present
SPTEs is quite costly, especially since it requires checking multiple
64-bit values.

On 64-bit KVM, this reduces the footprint of kvm.ko's .text by ~2k bytes.
On 32-bit KVM, this increases the footprint by ~200 bytes, but only because
gcc now inlines several more MMU helpers, e.g. drop_parent_pte().
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c |  8 ++++----
 arch/x86/kvm/mmu/spte.h | 11 ++++++++++-
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index d12acf5eb871..e07aabb23b8a 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -94,7 +94,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 		     bool can_unsync, bool host_writable, bool ad_disabled,
 		     u64 *new_spte)
 {
-	u64 spte = 0;
+	u64 spte = SPTE_MMU_PRESENT_MASK;
 	int ret = 0;
 
 	if (ad_disabled)
@@ -183,10 +183,10 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
 
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
 {
-	u64 spte;
+	u64 spte = SPTE_MMU_PRESENT_MASK;
 
-	spte = __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
-	       shadow_user_mask | shadow_x_mask | shadow_me_mask;
+	spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
+		shadow_user_mask | shadow_x_mask | shadow_me_mask;
 
 	if (ad_disabled)
 		spte |= SPTE_TDP_AD_DISABLED_MASK;

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 8996baa8da15..645e9bc2d4a2 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -5,6 +5,15 @@
 
 #include "mmu_internal.h"
 
+/*
+ * A MMU present SPTE is backed by actual memory and may or may not be present
+ * in hardware. E.g. MMIO SPTEs are not considered present. Use bit 11, as it
+ * is ignored by all flavors of SPTEs and checking a low bit often generates
+ * better code than for a high bit, e.g. 56+. MMU present checks are pervasive
+ * enough that the improved code generation is noticeable in KVM's footprint.
+ */
+#define SPTE_MMU_PRESENT_MASK		BIT_ULL(11)
+
 /*
  * TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
  * be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
@@ -241,7 +250,7 @@ static inline bool is_access_track_spte(u64 spte)
 
 static inline bool is_shadow_present_pte(u64 pte)
 {
-	return (pte != 0) && !is_mmio_spte(pte) && !is_removed_spte(pte);
+	return !!(pte & SPTE_MMU_PRESENT_MASK);
 }
 
 static inline bool is_large_pte(u64 pte)
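The payoff of the dedicated bit is visible in isolation: the old check has to
compare against zero and rule out two special SPTE encodings, while the new
one is a single AND. A hypothetical side-by-side sketch (not KVM code; the
boolean parameters stand in for is_mmio_spte() and is_removed_spte()):

    #include <stdbool.h>
    #include <stdint.h>

    #define SPTE_MMU_PRESENT_MASK (1ULL << 11)

    /* Before: multiple comparisons, some against 64-bit constants. */
    static bool present_old(uint64_t pte, bool is_mmio, bool is_removed)
    {
        return pte != 0 && !is_mmio && !is_removed;
    }

    /* After: one bit test, which typically compiles to a single TEST. */
    static bool present_new(uint64_t pte)
    {
        return pte & SPTE_MMU_PRESENT_MASK;
    }

    int main(void)
    {
        uint64_t pte = SPTE_MMU_PRESENT_MASK;
        return present_old(pte, false, false) == present_new(pte) ? 0 : 1;
    }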

From patchwork Thu Feb 25 20:47:46 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104995
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:46 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-22-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 21/24] KVM: x86/mmu: Tweak auditing WARN for A/D bits to
 !PRESENT (was MMIO)
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Tweak the MMU_WARN that guards against weirdness when querying A/D status
to fire on a !MMU_PRESENT SPTE, as opposed to a MMIO SPTE. Attempting to
query A/D status on any kind of !MMU_PRESENT SPTE, MMIO or otherwise,
indicates a KVM bug. Case in point, several now-fixed bugs were identified
by enabling this new WARN.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 645e9bc2d4a2..2fad4ccd3679 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -209,6 +209,11 @@ static inline bool is_mmio_spte(u64 spte)
 	       likely(shadow_mmio_value);
 }
 
+static inline bool is_shadow_present_pte(u64 pte)
+{
+	return !!(pte & SPTE_MMU_PRESENT_MASK);
+}
+
 static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 {
 	return sp->role.ad_disabled;
@@ -216,13 +221,13 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 
 static inline bool spte_ad_enabled(u64 spte)
 {
-	MMU_WARN_ON(is_mmio_spte(spte));
+	MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return (spte & SPTE_TDP_AD_MASK) != SPTE_TDP_AD_DISABLED_MASK;
 }
 
 static inline bool spte_ad_need_write_protect(u64 spte)
 {
-	MMU_WARN_ON(is_mmio_spte(spte));
+	MMU_WARN_ON(!is_shadow_present_pte(spte));
 	/*
 	 * This is benign for non-TDP SPTEs as SPTE_TDP_AD_ENABLED_MASK is '0',
 	 * and non-TDP SPTEs will never set these bits.  Optimize for 64-bit
@@ -233,13 +238,13 @@ static inline bool spte_ad_need_write_protect(u64 spte)
 
 static inline u64 spte_shadow_accessed_mask(u64 spte)
 {
-	MMU_WARN_ON(is_mmio_spte(spte));
+	MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
 static inline u64 spte_shadow_dirty_mask(u64 spte)
 {
-	MMU_WARN_ON(is_mmio_spte(spte));
+	MMU_WARN_ON(!is_shadow_present_pte(spte));
 	return spte_ad_enabled(spte) ?
 		shadow_dirty_mask : 0;
 }
 
@@ -248,11 +253,6 @@ static inline bool is_access_track_spte(u64 spte)
 	return !spte_ad_enabled(spte) && (spte & shadow_acc_track_mask) == 0;
 }
 
-static inline bool is_shadow_present_pte(u64 pte)
-{
-	return !!(pte & SPTE_MMU_PRESENT_MASK);
-}
-
 static inline bool is_large_pte(u64 pte)
 {
 	return pte & PT_PAGE_SIZE_MASK;
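The auditing pattern generalizes: any accessor whose result is meaningless
for a non-present SPTE can assert presence on entry. A small sketch of the
guard, with assert() standing in for MMU_WARN_ON() and the masks reduced to
illustrative constants (the accessed bit really is bit 8 for EPT, but the
semantics here are simplified):

    #include <assert.h>
    #include <stdint.h>

    #define SPTE_MMU_PRESENT_MASK (1ULL << 11)
    #define ACCESSED_MASK         (1ULL << 8)  /* illustrative */

    static uint64_t spte_accessed_bits(uint64_t spte)
    {
        /* Querying A/D state of a !present SPTE indicates a bug. */
        assert(spte & SPTE_MMU_PRESENT_MASK);
        return spte & ACCESSED_MASK;
    }

    int main(void)
    {
        return spte_accessed_bits(SPTE_MMU_PRESENT_MASK | ACCESSED_MASK) ? 0 : 1;
    }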

From patchwork Thu Feb 25 20:47:47 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104977
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:47 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-23-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 22/24] KVM: x86/mmu: Use is_removed_spte() instead of open
 coded equivalents
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Use the is_removed_spte() helper instead of open coding the check.

No functional change intended.

Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bef0e1908e82..7f2c4760b84d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -490,7 +490,7 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	 * Do not change removed SPTEs. Only the thread that froze the SPTE
 	 * may modify it.
 	 */
-	if (iter->old_spte == REMOVED_SPTE)
+	if (is_removed_spte(iter->old_spte))
 		return false;
 
 	if (cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte,
@@ -565,7 +565,7 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	 * should be used. If operating under the MMU lock in write mode, the
 	 * use of the removed SPTE should not be necessary.
 	 */
-	WARN_ON(iter->old_spte == REMOVED_SPTE);
+	WARN_ON(is_removed_spte(iter->old_spte));
 
 	WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte);
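Funneling the check through a single helper is what makes the next patch's
change of REMOVED_SPTE's value a one-liner: callers compare via the helper,
never against the raw constant. A sketch of the idiom (illustrative only;
the value shown is the pre-change encoding):

    #include <stdbool.h>
    #include <stdint.h>

    #define REMOVED_SPTE (1ULL << 59)  /* value before the following patch */

    static bool is_removed_spte(uint64_t spte)
    {
        return spte == REMOVED_SPTE;
    }

    /* Callers never mention REMOVED_SPTE directly, only the predicate. */
    static bool may_modify(uint64_t old_spte)
    {
        return !is_removed_spte(old_spte);
    }

    int main(void)
    {
        return may_modify(REMOVED_SPTE) ? 1 : 0;  /* removed SPTE: returns 0 */
    }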

From patchwork Thu Feb 25 20:47:48 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104981
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:48 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-24-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 23/24] KVM: x86/mmu: Use low available bits for removed SPTEs
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Use low "available" bits to tag REMOVED SPTEs. Using a high bit is
moderately costly as it often causes the compiler to generate a 64-bit
immediate. More importantly, this makes it very clear that REMOVED_SPTE is
a value, not a flag.

Cc: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 11 ++++++++++-
 arch/x86/kvm/mmu/spte.h | 11 +++++++----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index e07aabb23b8a..66d43cec0c31 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -277,7 +277,16 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 		      SHADOW_NONPRESENT_OR_RSVD_MASK_LEN)))
 		mmio_value = 0;
 
-	WARN_ON((mmio_value & mmio_mask) != mmio_value);
+	/*
+	 * The masked MMIO value must obviously match itself and a removed SPTE
+	 * must not get a false positive. Removed SPTEs and MMIO SPTEs should
+	 * never collide as MMIO must set some RWX bits, and removed SPTEs must
+	 * not set any RWX bits.
+	 */
+	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
+	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
+		mmio_value = 0;
+
 	shadow_mmio_value = mmio_value;
 	shadow_mmio_mask  = mmio_mask;
 	shadow_mmio_access_mask = access_mask;

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 2fad4ccd3679..b53036d9ddf3 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -174,13 +174,16 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  * non-present intermediate value. Other threads which encounter this value
  * should not modify the SPTE.
  *
- * This constant works because it is considered non-present on both AMD and
- * Intel CPUs and does not create a L1TF vulnerability because the pfn section
- * is zeroed out.
+ * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
+ * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
+ * vulnerability. Use only low bits to avoid 64-bit immediates.
  *
  * Only used by the TDP MMU.
  */
-#define REMOVED_SPTE (1ull << 59)
+#define REMOVED_SPTE	0x5a0ULL
+
+/* Removed SPTEs must not be misconstrued as shadow present PTEs. */
+static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
 
 static inline bool is_removed_spte(u64 spte)
 {
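The chosen value, 0x5a0 (bits 5, 7, 8 and 10), can be sanity-checked against
every property the comment claims: no RWX bits (2:0), no PFN bits, no
MMU_PRESENT bit (11), and small enough to avoid a 64-bit immediate. A
stand-alone C11 sketch of those checks (the PFN mask here is illustrative):

    #include <assert.h>

    #define REMOVED_SPTE          0x5a0ULL
    #define RWX_MASK              0x7ULL              /* EPT R/W/X, bits 2:0 */
    #define SPTE_MMU_PRESENT_MASK (1ULL << 11)
    #define PFN_MASK              0xFFFFFFFFFF000ULL  /* bits 51:12, illustrative */

    static_assert(!(REMOVED_SPTE & RWX_MASK), "must look non-present");
    static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK), "must not look MMU-present");
    static_assert(!(REMOVED_SPTE & PFN_MASK), "must not create an L1TF target");
    static_assert(REMOVED_SPTE <= 0xFFF, "low bits only, no 64-bit immediate");

    int main(void) { return 0; }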

From patchwork Thu Feb 25 20:47:49 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12104979
Reply-To: Sean Christopherson
Date: Thu, 25 Feb 2021 12:47:49 -0800
In-Reply-To: <20210225204749.1512652-1-seanjc@google.com>
Message-Id: <20210225204749.1512652-25-seanjc@google.com>
References: <20210225204749.1512652-1-seanjc@google.com>
Subject: [PATCH 24/24] KVM: x86/mmu: Dump reserved bits if they're detected
 on non-MMIO SPTE
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
X-Mailing-List: kvm@vger.kernel.org

Debugging unexpected reserved bit page faults sucks. Dump the reserved
bits that (likely) caused the page fault to make debugging suck a little
less.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e636fcd529d2..dab0e950a54e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3555,11 +3555,12 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 			  __is_rsvd_bits_set(rsvd_check, sptes[level], level);
 
 	if (reserved) {
-		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
+		pr_err("%s: reserved bits set on MMU-present spte, addr 0x%llx, hierarchy:\n",
 		       __func__, addr);
 		for (level = root; level >= leaf; level--)
-			pr_err("------ spte 0x%llx level %d.\n",
-			       sptes[level], level);
+			pr_err("------ spte = 0x%llx level = %d, rsvd bits = 0x%llx",
+			       sptes[level], level,
+			       rsvd_check->rsvd_bits_mask[(sptes[level] >> 7) & 1][level-1]);
 	}
 
 	return reserved;
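For reference, the two-dimensional indexing in the new pr_err() picks the
reserved-bits mask for either a regular or a large-page mapping: bit 7 of
the SPTE is the PAGE_SIZE bit, so it selects the row, and the SPTE's level
selects the column. A hedged sketch of that lookup, with the array shape
assumed from the hunk rather than taken from KVM's headers:

    #include <stdint.h>

    /* [0][*] = non-large mappings, [1][*] = large mappings, per level. */
    static uint64_t rsvd_for(const uint64_t rsvd_bits_mask[2][5],
                             uint64_t spte, int level)
    {
        int huge = (spte >> 7) & 1;  /* PT_PAGE_SIZE_MASK bit */

        return rsvd_bits_mask[huge][level - 1];
    }

    int main(void)
    {
        static const uint64_t masks[2][5] = { { 0 }, { 0 } };

        return (int)rsvd_for(masks, 1ULL << 7, 1);
    }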