From patchwork Mon Jun 13 22:57:16 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880273
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/8] KVM: x86/mmu: Drop unused CMPXCHG macro from paging_tmpl.h
Date: Mon, 13 Jun 2022 22:57:16 +0000
Message-Id: <20220613225723.2734132-2-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Drop the CMPXCHG macro from paging_tmpl.h; it's no longer used now that
KVM uses a common uaccess helper to do 8-byte CMPXCHG.

Fixes: f122dfe44768 ("KVM: x86: Use __try_cmpxchg_user() to update guest PTE A/D bits")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fe35d8fd3276..f595c4b8657f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -34,7 +34,6 @@
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
 	#ifdef CONFIG_X86_64
 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
-	#define CMPXCHG "cmpxchgq"
 	#else
 	#define PT_MAX_FULL_LEVELS 2
 	#endif
@@ -51,7 +50,6 @@
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
-	#define CMPXCHG "cmpxchgl"
 #elif PTTYPE == PTTYPE_EPT
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
@@ -64,9 +62,6 @@
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled)
-	#ifdef CONFIG_X86_64
-	#define CMPXCHG "cmpxchgq"
-	#endif
 	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
 #else
 	#error Invalid PTTYPE value
@@ -1100,7 +1095,6 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 #undef PT_MAX_FULL_LEVELS
 #undef gpte_to_gfn
 #undef gpte_to_gfn_lvl
-#undef CMPXCHG
 #undef PT_GUEST_ACCESSED_MASK
 #undef PT_GUEST_DIRTY_MASK
 #undef PT_GUEST_DIRTY_SHIFT
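For context, the dropped macro existed only to emit a raw cmpxchgq/cmpxchgl when
atomically updating guest PTE Accessed/Dirty bits. Below is a minimal user-space
sketch of the compare-and-exchange semantics involved, using compiler builtins
rather than the kernel's __try_cmpxchg_user(); the helper name is made up for
illustration, and the kernel's recovery from faults on the user mapping of the
guest page table is omitted.

#include <stdbool.h>
#include <stdint.h>

#define PT_ACCESSED_SHIFT 5	/* x86 PTE Accessed bit */

/*
 * Update a 64-bit guest PTE only if it hasn't changed since it was read,
 * i.e. an 8-byte CMPXCHG.  Illustrative only, not KVM code.
 */
static bool gpte_set_accessed(uint64_t *ptep, uint64_t old_pte)
{
	uint64_t new_pte = old_pte | (1ULL << PT_ACCESSED_SHIFT);

	/* Returns true iff *ptep still held old_pte and was replaced. */
	return __atomic_compare_exchange_n(ptep, &old_pte, new_pte, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}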
From patchwork Mon Jun 13 22:57:17 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880279
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/8] KVM: VMX: Refactor 32-bit PSE PT creation to avoid using MMU macro
Date: Mon, 13 Jun 2022 22:57:17 +0000
Message-Id: <20220613225723.2734132-3-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Compute the number of PTEs to be filled for the 32-bit PSE page tables
using the page size and the size of each entry.  While using the MMU's
PT32_ENT_PER_PAGE macro is arguably better in isolation, removing VMX's
usage will allow a future namespacing cleanup to move the guest page
table macros into paging_tmpl.h, out of the reach of code that isn't
directly related to shadow paging.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5e14e4c40007..b774f8c1b952 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3704,7 +3704,7 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	}
 
 	/* Set up identity-mapping pagetable for EPT in real mode */
-	for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
+	for (i = 0; i < (PAGE_SIZE / sizeof(tmp)); i++) {
 		tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
 			_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
 		if (__copy_to_user(uaddr + i * sizeof(tmp), &tmp, sizeof(tmp))) {
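To see why the replacement is equivalent: tmp in init_rmode_identity_map() is a
u32, so PAGE_SIZE / sizeof(tmp) is 4096 / 4 = 1024, which is exactly
PT32_ENT_PER_PAGE, i.e. 1 << PT32_PT_BITS. A standalone sketch of the arithmetic
(the constants and scaffolding below are illustrative, not the kernel code):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
	/* tmp in init_rmode_identity_map() is a u32, i.e. 4 bytes. */
	size_t nr_ptes = PAGE_SIZE / sizeof(uint32_t);
	size_t i;

	/* 4096 / 4 == 1024 == PT32_ENT_PER_PAGE == 1 << PT32_PT_BITS. */
	printf("%zu PTEs per 32-bit PSE page table\n", nr_ptes);

	/* Each PSE PDE maps 4 MiB, so entry i covers GPA i << 22. */
	for (i = 0; i < 4; i++)
		printf("entry %zu -> GPA 0x%08x\n", i, (uint32_t)(i << 22));
	return 0;
}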
From patchwork Mon Jun 13 22:57:18 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880278
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/8] KVM: x86/mmu: Bury 32-bit PSE paging helpers in paging_tmpl.h
Date: Mon, 13 Jun 2022 22:57:18 +0000
Message-Id: <20220613225723.2734132-4-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Move a handful of one-off macros and helpers for 32-bit PSE paging into
paging_tmpl.h and hide them behind "PTTYPE == 32".  Under no circumstance
should anything but 32-bit shadow paging care about PSE paging.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h             |  5 -----
 arch/x86/kvm/mmu/mmu.c         | 12 ------------
 arch/x86/kvm/mmu/paging_tmpl.h | 19 ++++++++++++++++++-
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index f8192864b496..d1021e34ac15 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -34,11 +34,6 @@
 #define PT_DIR_PAT_SHIFT 12
 #define PT_DIR_PAT_MASK (1ULL << PT_DIR_PAT_SHIFT)
 
-#define PT32_DIR_PSE36_SIZE 4
-#define PT32_DIR_PSE36_SHIFT 13
-#define PT32_DIR_PSE36_MASK \
-	(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)
-
 #define PT64_ROOT_5LEVEL 5
 #define PT64_ROOT_4LEVEL 4
 #define PT32_ROOT_LEVEL 2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17252f39bd7c..f1961fe3fe67 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -321,18 +321,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 	return likely(kvm_gen == spte_gen);
 }
 
-static int is_cpuid_PSE36(void)
-{
-	return 1;
-}
-
-static gfn_t pse36_gfn_delta(u32 gpte)
-{
-	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
-
-	return (gpte & PT32_DIR_PSE36_MASK) << shift;
-}
-
 #ifdef CONFIG_X86_64
 static void __set_spte(u64 *sptep, u64 spte)
 {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f595c4b8657f..ef02e6bb0bcb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -50,6 +50,12 @@
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
+
+	#define is_cpuid_PSE36() true
+	#define PT32_DIR_PSE36_SIZE 4
+	#define PT32_DIR_PSE36_SHIFT 13
+	#define PT32_DIR_PSE36_MASK \
+		(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)
 #elif PTTYPE == PTTYPE_EPT
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
@@ -92,6 +98,15 @@ struct guest_walker {
 	struct x86_exception fault;
 };
 
+#if PTTYPE == 32
+static inline gfn_t pse36_gfn_delta(u32 gpte)
+{
+	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
+
+	return (gpte & PT32_DIR_PSE36_MASK) << shift;
+}
+#endif
+
 static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
 {
 	return (gpte & PT_LVL_ADDR_MASK(lvl)) >> PAGE_SHIFT;
@@ -416,8 +431,10 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	gfn = gpte_to_gfn_lvl(pte, walker->level);
 	gfn += (addr & PT_LVL_OFFSET_MASK(walker->level)) >> PAGE_SHIFT;
 
-	if (PTTYPE == 32 && walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
+#if PTTYPE == 32
+	if (walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
 		gfn += pse36_gfn_delta(pte);
+#endif
 
 	real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access,
 				     &walker->fault);
 	if (real_gpa == UNMAPPED_GVA)
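As a refresher on the encoding being moved: of the PSE-36 bits, KVM's 4-bit
PT32_DIR_PSE36_MASK covers PTE bits 16:13, which supply physical address bits
35:32 of a 4 MiB page. A standalone sketch mirroring the relocated helper (the
macro values are copied from the patch; the main() scaffolding is illustrative):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PT32_DIR_PSE36_SIZE	4
#define PT32_DIR_PSE36_SHIFT	13
#define PT32_DIR_PSE36_MASK \
	(((1ULL << PT32_DIR_PSE36_SIZE) - 1) << PT32_DIR_PSE36_SHIFT)

/* Mirror of pse36_gfn_delta(): PTE bits 16:13 -> PA bits 35:32. */
static uint64_t pse36_gfn_delta(uint32_t gpte)
{
	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;	/* == 7 */

	return (gpte & PT32_DIR_PSE36_MASK) << shift;
}

int main(void)
{
	/* A 4 MiB PSE PDE with PTE bit 13 set, i.e. PA bit 32. */
	uint32_t gpte = 1u << 13;

	/* Prints 0x100000: gfn delta of 1 << 20 pages == GPA bit 32. */
	printf("gfn delta = 0x%llx\n",
	       (unsigned long long)pse36_gfn_delta(gpte));
	return 0;
}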
From patchwork Mon Jun 13 22:57:19 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880276
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/8] KVM: x86/mmu: Dedup macros for computing various page table masks
Date: Mon, 13 Jun 2022 22:57:19 +0000
Message-Id: <20220613225723.2734132-5-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Provide common helper macros to generate various masks, shifts, etc. for
32-bit vs. 64-bit page tables.  Only the inputs differ; the actual
calculations are identical.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h              |  4 ++--
 arch/x86/kvm/mmu/mmu.c          | 15 ++++++---------
 arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/paging.h       |  9 +++++----
 arch/x86/kvm/mmu/spte.h         |  7 +++----
 5 files changed, 30 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index d1021e34ac15..6efe6bd7fb6e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -7,9 +7,9 @@
 #include "cpuid.h"
 
 #define PT64_PT_BITS 9
-#define PT64_ENT_PER_PAGE (1 << PT64_PT_BITS)
+#define PT64_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT64_PT_BITS)
 #define PT32_PT_BITS 10
-#define PT32_ENT_PER_PAGE (1 << PT32_PT_BITS)
+#define PT32_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT32_PT_BITS)
 
 #define PT_WRITABLE_SHIFT 1
 #define PT_USER_SHIFT 2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f1961fe3fe67..afe3deaa0d95 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -113,23 +113,20 @@ module_param(dbg, bool, 0644);
 
 #define PT32_LEVEL_BITS 10
 
-#define PT32_LEVEL_SHIFT(level) \
-	(PAGE_SHIFT + (level - 1) * PT32_LEVEL_BITS)
+#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
 
 #define PT32_LVL_OFFSET_MASK(level) \
-	(PT32_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
-				* PT32_LEVEL_BITS))) - 1))
-
-#define PT32_INDEX(address, level)\
-	(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
+	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
+#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
 
 #define PT32_BASE_ADDR_MASK PAGE_MASK
+
 #define PT32_DIR_BASE_ADDR_MASK \
 	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
+
 #define PT32_LVL_ADDR_MASK(level) \
-	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
-			* PT32_LEVEL_BITS))) - 1))
+	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
 
 #include
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bd2a26897b97..5e1e3c8f8aaa 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -20,6 +20,20 @@ extern bool dbg;
 #define MMU_WARN_ON(x) do { } while (0)
 #endif
 
+/* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
+#define __PT_LEVEL_SHIFT(level, bits_per_level)	\
+	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
+#define __PT_INDEX(address, level, bits_per_level) \
+	(((address) >> __PT_LEVEL_SHIFT(level, bits_per_level)) & ((1 << (bits_per_level)) - 1))
+
+#define __PT_LVL_ADDR_MASK(base_addr_mask, level, bits_per_level) \
+	((base_addr_mask) & ~((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
+
+#define __PT_LVL_OFFSET_MASK(base_addr_mask, level, bits_per_level) \
+	((base_addr_mask) & ((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
+
+#define __PT_ENT_PER_PAGE(bits_per_level) (1 << (bits_per_level))
+
 /*
  * Unlike regular MMU roots, PAE "roots", a.k.a. PDPTEs/PDPTRs, have a PRESENT
  * bit, and thus are guaranteed to be non-zero when valid.  And, when a guest
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index de8ab323bb70..23f3f64b8092 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -4,11 +4,12 @@
 #define __KVM_X86_PAGING_H
 
 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+
 #define PT64_LVL_ADDR_MASK(level) \
-	(GUEST_PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
+	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
+
 #define PT64_LVL_OFFSET_MASK(level) \
-	(GUEST_PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
+	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
+
 #endif /* __KVM_X86_PAGING_H */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0127bb6e3c7d..d5a8183b7232 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -55,11 +55,10 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 
 #define PT64_LEVEL_BITS 9
 
-#define PT64_LEVEL_SHIFT(level) \
-	(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
+#define PT64_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT64_LEVEL_BITS)
+
+#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
 
-#define PT64_INDEX(address, level)\
-	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
 #define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
 
 /*
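To make the new builders concrete, here is a standalone sketch that
instantiates them for both guest PTE widths. The macro bodies are copied from
the patch; the sample address and main() scaffolding are illustrative:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Copies of the common builders added to mmu_internal.h. */
#define __PT_LEVEL_SHIFT(level, bits_per_level)	\
	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
#define __PT_INDEX(address, level, bits_per_level) \
	(((address) >> __PT_LEVEL_SHIFT(level, bits_per_level)) & ((1 << (bits_per_level)) - 1))
#define __PT_ENT_PER_PAGE(bits_per_level) (1 << (bits_per_level))

int main(void)
{
	uint64_t addr = 0xc0801000;	/* arbitrary sample address */

	/* 64-bit/PAE tables use 9 bits per level, 32-bit tables use 10. */
	printf("64-bit L1 index: %llu\n",	/* bits 20:12 -> 1 */
	       (unsigned long long)__PT_INDEX(addr, 1, 9));
	printf("32-bit L2 index: %llu\n",	/* bits 31:22 -> 770 */
	       (unsigned long long)__PT_INDEX(addr, 2, 10));
	printf("entries per table: %d vs %d\n",	/* 512 vs 1024 */
	       __PT_ENT_PER_PAGE(9), __PT_ENT_PER_PAGE(10));
	return 0;
}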
From patchwork Mon Jun 13 22:57:20 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880280
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/8] KVM: x86/mmu: Use separate namespaces for guest PTEs and shadow PTEs
Date: Mon, 13 Jun 2022 22:57:20 +0000
Message-Id: <20220613225723.2734132-6-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Separate the macros for KVM's shadow PTEs (SPTE) from guest 64-bit PTEs
(PT64).  SPTE and PT64 are _mostly_ the same, but the few differences are
quite critical, e.g. *_BASE_ADDR_MASK must differentiate between host and
guest physical address spaces, and SPTE_PERM_MASK (was PT64_PERM_MASK) is
very much specific to SPTEs.

Opportunistically (and temporarily) move most guest macros into paging.h
to clearly associate them with shadow paging, and to ensure that they're
not used as of this commit.  A future patch will eliminate them entirely.

Sadly, PT32_LEVEL_BITS is left behind in mmu_internal.h because it's
needed for the quadrant calculation in kvm_mmu_get_page().  The quadrant
calculation is hot enough (when using shadow paging with 32-bit guests)
that adding a per-context helper is undesirable, and burying the
computation in paging_tmpl.h with a forward declaration isn't exactly an
improvement.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.h              |  5 ----
 arch/x86/kvm/mmu/mmu.c          | 47 +++++++++++----------------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +++
 arch/x86/kvm/mmu/paging.h       | 20 ++++++++++++++
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 +--
 arch/x86/kvm/mmu/spte.c         |  2 +-
 arch/x86/kvm/mmu/spte.h         | 27 +++++++++----------
 arch/x86/kvm/mmu/tdp_iter.c     |  6 ++---
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++---
 9 files changed, 59 insertions(+), 61 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6efe6bd7fb6e..a99acec925eb 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -6,11 +6,6 @@
 #include "kvm_cache_regs.h"
 #include "cpuid.h"
 
-#define PT64_PT_BITS 9
-#define PT64_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT64_PT_BITS)
-#define PT32_PT_BITS 10
-#define PT32_ENT_PER_PAGE __PT_ENT_PER_PAGE(PT32_PT_BITS)
-
 #define PT_WRITABLE_SHIFT 1
 #define PT_USER_SHIFT 2
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index afe3deaa0d95..aedb8d871030 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -111,23 +111,6 @@ module_param(dbg, bool, 0644);
 
 #define PTE_PREFETCH_NUM 8
 
-#define PT32_LEVEL_BITS 10
-
-#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
-
-#define PT32_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
-#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
-
-#define PT32_BASE_ADDR_MASK PAGE_MASK
-
-#define PT32_DIR_BASE_ADDR_MASK \
-	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
-
-#define PT32_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
 #include
 
 /* make pte_list_desc fit well in cache lines */
@@ -702,7 +685,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 	if (!sp->role.direct)
 		return sp->gfns[index];
 
-	return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS));
+	return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS));
 }
 
 static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
@@ -1774,7 +1757,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			continue;
 		}
 
-		child = to_shadow_page(ent & PT64_BASE_ADDR_MASK);
+		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
 
 		if (child->unsync_children) {
 			if (mmu_pages_add(pvec, child, i))
@@ -2025,8 +2008,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	role.direct = direct;
 	role.access = access;
 	if (role.has_4_byte_gpte) {
-		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
+		quadrant = gaddr >> (PAGE_SHIFT + (SPTE_LEVEL_BITS * level));
+		quadrant &= (1 << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
 	if (level <= vcpu->arch.mmu->cpu_role.base.level)
@@ -2130,7 +2113,7 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterato
 		iterator->shadow_addr
 			= vcpu->arch.mmu->pae_root[(addr >> 30) & 3];
-		iterator->shadow_addr &= PT64_BASE_ADDR_MASK;
+		iterator->shadow_addr &= SPTE_BASE_ADDR_MASK;
 		--iterator->level;
 		if (!iterator->shadow_addr)
 			iterator->level = 0;
@@ -2149,7 +2132,7 @@ static bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
 	if (iterator->level < PG_LEVEL_4K)
 		return false;
 
-	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
+	iterator->index = SPTE_INDEX(iterator->addr, iterator->level);
 	iterator->sptep	= ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;
 }
@@ -2162,7 +2145,7 @@ static void __shadow_walk_next(struct kvm_shadow_walk_iterator *iterator,
 		return;
 	}
 
-	iterator->shadow_addr = spte & PT64_BASE_ADDR_MASK;
+	iterator->shadow_addr = spte & SPTE_BASE_ADDR_MASK;
 	--iterator->level;
 }
 
@@ -2201,7 +2184,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * so we should update the spte at this point to get
		 * a new sp with the correct access.
		 */
-		child = to_shadow_page(*sptep & PT64_BASE_ADDR_MASK);
+		child = to_shadow_page(*sptep & SPTE_BASE_ADDR_MASK);
 		if (child->role.access == direct_access)
 			return;
 
@@ -2222,7 +2205,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 		if (is_last_spte(pte, sp->role.level)) {
 			drop_spte(kvm, spte);
 		} else {
-			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
+			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
 			drop_parent_pte(child, spte);
 
 			/*
@@ -2248,7 +2231,7 @@ static int kvm_mmu_page_unlink_children(struct kvm *kvm,
 	int zapped = 0;
 	unsigned i;
 
-	for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
+	for (i = 0; i < SPTE_ENT_PER_PAGE; ++i)
 		zapped += mmu_page_zap_pte(kvm, sp, sp->spt + i, invalid_list);
 
 	return zapped;
@@ -2661,7 +2644,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			struct kvm_mmu_page *child;
 			u64 pte = *sptep;
 
-			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
+			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
 			drop_parent_pte(child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
@@ -3250,7 +3233,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	if (!VALID_PAGE(*root_hpa))
 		return;
 
-	sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK);
+	sp = to_shadow_page(*root_hpa & SPTE_BASE_ADDR_MASK);
 	if (WARN_ON(!sp))
 		return;
 
@@ -3722,7 +3705,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 		if (IS_VALID_PAE_ROOT(root)) {
-			root &= PT64_BASE_ADDR_MASK;
+			root &= SPTE_BASE_ADDR_MASK;
 			sp = to_shadow_page(root);
 			mmu_sync_children(vcpu, sp, true);
 		}
@@ -5184,11 +5167,11 @@ static bool need_remote_flush(u64 old, u64 new)
 		return false;
 	if (!is_shadow_present_pte(new))
 		return true;
-	if ((old ^ new) & PT64_BASE_ADDR_MASK)
+	if ((old ^ new) & SPTE_BASE_ADDR_MASK)
 		return true;
 	old ^= shadow_nx_mask;
 	new ^= shadow_nx_mask;
-	return (old & ~new & PT64_PERM_MASK) != 0;
+	return (old & ~new & SPTE_PERM_MASK) != 0;
 }
 
 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5e1e3c8f8aaa..cb9d4d358335 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -20,6 +20,9 @@ extern bool dbg;
 #define MMU_WARN_ON(x) do { } while (0)
 #endif
 
+/* The number of bits for 32-bit PTEs is needed to compute the quadrant. */
+#define PT32_LEVEL_BITS 10
+
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
 #define __PT_LEVEL_SHIFT(level, bits_per_level)	\
 	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index 23f3f64b8092..3fed2c101de3 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -5,11 +5,31 @@
 
 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
 
+#define PT64_LEVEL_BITS 9
+
+#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
+
 #define PT64_LVL_ADDR_MASK(level) \
 	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
 
 #define PT64_LVL_OFFSET_MASK(level) \
 	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
+
+
+#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
+
+#define PT32_LVL_OFFSET_MASK(level) \
+	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
+
+#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
+
+#define PT32_BASE_ADDR_MASK PAGE_MASK
+
+#define PT32_DIR_BASE_ADDR_MASK \
+	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
+
+#define PT32_LVL_ADDR_MASK(level) \
+	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
+
 #endif /* __KVM_X86_PAGING_H */
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index ef02e6bb0bcb..75f6b01edcf8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -900,7 +900,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	WARN_ON(sp->role.level != PG_LEVEL_4K);
 
 	if (PTTYPE == 32)
-		offset = sp->role.quadrant << PT64_LEVEL_BITS;
+		offset = sp->role.quadrant << SPTE_LEVEL_BITS;
 
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
@@ -1035,7 +1035,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 
-	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
 		u64 *sptep, spte;
 		struct kvm_memory_slot *slot;
 		unsigned pte_access;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cda1851ec155..242e4828d7df 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -301,7 +301,7 @@ u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn)
 {
 	u64 new_spte;
 
-	new_spte = old_spte & ~PT64_BASE_ADDR_MASK;
+	new_spte = old_spte & ~SPTE_BASE_ADDR_MASK;
 	new_spte |= (u64)new_pfn << PAGE_SHIFT;
 
 	new_spte &= ~PT_WRITABLE_MASK;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index d5a8183b7232..121c5eaaec77 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -36,12 +36,12 @@ extern bool __read_mostly enable_mmio_caching;
 static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 
 #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
-#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
+#define SPTE_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
 #else
-#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
 #endif
 
-#define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
+#define SPTE_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
 			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)
 
 #define ACC_EXEC_MASK    1
@@ -50,16 +50,13 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 #define ACC_ALL          (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
 
 /* The mask for the R/X bits in EPT PTEs */
-#define PT64_EPT_READABLE_MASK			0x1ull
-#define PT64_EPT_EXECUTABLE_MASK		0x4ull
+#define SPTE_EPT_READABLE_MASK			0x1ull
+#define SPTE_EPT_EXECUTABLE_MASK		0x4ull
 
-#define PT64_LEVEL_BITS 9
-
-#define PT64_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT64_LEVEL_BITS)
-
-#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
-
-#define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
+#define SPTE_LEVEL_BITS			9
+#define SPTE_LEVEL_SHIFT(level)		__PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
+#define SPTE_INDEX(address, level)	__PT_INDEX(address, level, SPTE_LEVEL_BITS)
+#define SPTE_ENT_PER_PAGE		__PT_ENT_PER_PAGE(SPTE_LEVEL_BITS)
 
 /*
  * The mask/shift to use for saving the original R/X bits when marking the PTE
@@ -68,8 +65,8 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
 * restored only when a write is attempted to the page.  This mask obviously
 * must not overlap the A/D type mask.
 */
-#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (PT64_EPT_READABLE_MASK | \
-					  PT64_EPT_EXECUTABLE_MASK)
+#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
+					  SPTE_EPT_EXECUTABLE_MASK)
 #define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 54
 #define SHADOW_ACC_TRACK_SAVED_MASK	(SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
 					 SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
@@ -281,7 +278,7 @@ static inline bool is_executable_pte(u64 spte)
 
 static inline kvm_pfn_t spte_to_pfn(u64 pte)
 {
-	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
+	return (pte & SPTE_BASE_ADDR_MASK) >> PAGE_SHIFT;
 }
 
 static inline bool is_accessed_spte(u64 spte)
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index ee4802d7b36c..9c65a64a56d9 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -11,7 +11,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 {
 	iter->sptep = iter->pt_path[iter->level - 1] +
-		SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
+		SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
 	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
 }
 
@@ -116,8 +116,8 @@ static bool try_step_side(struct tdp_iter *iter)
 	 * Check if the iterator is already at the end of the current page
	 * table.
	 */
-	if (SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level) ==
-	    (PT64_ENT_PER_PAGE - 1))
+	if (SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level) ==
+	    (SPTE_ENT_PER_PAGE - 1))
 		return false;
 
 	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7b9265d67131..26cb9fed2f18 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -425,7 +425,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 
 	tdp_mmu_unlink_sp(kvm, sp, shared);
 
-	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
 		tdp_ptep_t sptep = pt + i;
 		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
 		u64 old_spte;
@@ -1487,7 +1487,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * No need for atomics when writing to sp->spt since the page table has
	 * not been linked in yet and thus is not reachable from any other CPU.
	 */
-	for (i = 0; i < PT64_ENT_PER_PAGE; i++)
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++)
 		sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);
 
 	/*
@@ -1507,7 +1507,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * are overwriting from the page stats.  But we have to manually update
	 * the page stats with the new present child pages.
	 */
-	kvm_update_page_stats(kvm, level - 1, PT64_ENT_PER_PAGE);
+	kvm_update_page_stats(kvm, level - 1, SPTE_ENT_PER_PAGE);
 
 out:
	trace_kvm_mmu_split_huge_page(iter->gfn, huge_spte, level, ret);
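The quadrant computation called out in the commit message is what keeps
PT32_LEVEL_BITS in mmu_internal.h: a 1024-entry 32-bit guest page table is
shadowed by several 512-entry shadow pages, and the quadrant records which
slice of the guest table a shadow page covers. A standalone sketch of that
math, mirroring kvm_mmu_get_page(); the sample addresses and main()
scaffolding are illustrative:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define SPTE_LEVEL_BITS	9	/* shadow PTEs: 512 entries per page */
#define PT32_LEVEL_BITS	10	/* 32-bit guest PTEs: 1024 entries per page */

static unsigned int quadrant(uint64_t gaddr, int level)
{
	unsigned int q = gaddr >> (PAGE_SHIFT + SPTE_LEVEL_BITS * level);

	return q & ((1 << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1);
}

int main(void)
{
	/* Level 1 has two quadrants (0..1), level 2 has four (0..3). */
	printf("gaddr 0x00200000, L1 -> quadrant %u\n", quadrant(0x00200000, 1));
	printf("gaddr 0xc0000000, L2 -> quadrant %u\n", quadrant(0xc0000000, 2));
	return 0;
}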
From patchwork Mon Jun 13 22:57:21 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880277
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/8] KVM: x86/mmu: Use common macros to compute 32/64-bit paging masks
Date: Mon, 13 Jun 2022 22:57:21 +0000
Message-Id: <20220613225723.2734132-7-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Dedup the code for generating (most of) the per-type PT_* masks in
paging_tmpl.h.  The relevant macros only vary based on the number of bits
per level, and that smidge of info is already provided in a common form
as PT_LEVEL_BITS.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging.h      | 26 --------------------------
 arch/x86/kvm/mmu/paging_tmpl.h | 25 +++++++++++--------------
 2 files changed, 11 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
index 3fed2c101de3..9de4976b2d46 100644
--- a/arch/x86/kvm/mmu/paging.h
+++ b/arch/x86/kvm/mmu/paging.h
@@ -5,31 +5,5 @@
 
 #define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
 
-#define PT64_LEVEL_BITS 9
-
-#define PT64_INDEX(address, level) __PT_INDEX(address, level, PT64_LEVEL_BITS)
-
-#define PT64_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
-
-#define PT64_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(GUEST_PT64_BASE_ADDR_MASK, level, PT64_LEVEL_BITS)
-
-
-#define PT32_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, PT32_LEVEL_BITS)
-
-#define PT32_LVL_OFFSET_MASK(level) \
-	__PT_LVL_OFFSET_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
-#define PT32_INDEX(address, level) __PT_INDEX(address, level, PT32_LEVEL_BITS)
-
-#define PT32_BASE_ADDR_MASK PAGE_MASK
-
-#define PT32_DIR_BASE_ADDR_MASK \
-	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
-
-#define PT32_LVL_ADDR_MASK(level) \
-	__PT_LVL_ADDR_MASK(PT32_BASE_ADDR_MASK, level, PT32_LEVEL_BITS)
-
 #endif /* __KVM_X86_PAGING_H */
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 75f6b01edcf8..0bb2a6c97ebb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -16,8 +16,9 @@
  */
 
 /*
- * We need the mmu code to access both 32-bit and 64-bit guest ptes,
- * so the code in this file is compiled twice, once per pte size.
+ * The MMU needs to be able to access/walk 32-bit and 64-bit guest page tables,
+ * as well as guest EPT tables, so the code in this file is compiled thrice,
+ * once per guest PTE type.  The per-type defines are #undef'd at the end.
  */
 
 #if PTTYPE == 64
@@ -25,10 +26,7 @@
 	#define guest_walker guest_walker64
 	#define FNAME(name) paging##64_##name
 	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
-	#define PT_LEVEL_BITS PT64_LEVEL_BITS
+	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
@@ -41,10 +39,7 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK PT32_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT32_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT32_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT32_INDEX(addr, level)
+	#define PT_BASE_ADDR_MASK PAGE_MASK
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
@@ -61,10 +56,7 @@
 	#define guest_walker guest_walkerEPT
 	#define FNAME(name) ept_##name
 	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
-	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
-	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
-	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
-	#define PT_LEVEL_BITS PT64_LEVEL_BITS
+	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
 	#define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled)
@@ -73,6 +65,11 @@
 	#error Invalid PTTYPE value
 #endif
 
+/* Common logic, but per-type values.  These also need to be undefined. */
+#define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
+#define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
+#define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
+
 #define PT_GUEST_DIRTY_MASK    (1 << PT_GUEST_DIRTY_SHIFT)
 #define PT_GUEST_ACCESSED_MASK (1 << PT_GUEST_ACCESSED_SHIFT)
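A standalone sketch of what the deduplicated PT_LVL_*_MASK macros expand to
for 32-bit paging at level 2 (a 4 MiB PDE). Note how bits 63:32 of a 64-bit
PAGE_MASK leak into the address mask; that is harmless for 32-bit gPTEs and
is tightened by the next patch. The macro bodies are copied from
mmu_internal.h, the rest is illustrative:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_MASK	(~((1ULL << PAGE_SHIFT) - 1))	/* as on a 64-bit build */

#define __PT_LVL_ADDR_MASK(base_addr_mask, level, bits_per_level) \
	((base_addr_mask) & ~((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))
#define __PT_LVL_OFFSET_MASK(base_addr_mask, level, bits_per_level) \
	((base_addr_mask) & ((1ULL << (PAGE_SHIFT + (((level) - 1) * (bits_per_level)))) - 1))

int main(void)
{
	/* Address mask: bits 31:22 (plus PAGE_MASK's irrelevant high bits). */
	printf("PT_LVL_ADDR_MASK(2)   = 0x%016llx\n",
	       (unsigned long long)__PT_LVL_ADDR_MASK(PAGE_MASK, 2, 10));
	/* Offset-of-gfn mask: bits 21:12. */
	printf("PT_LVL_OFFSET_MASK(2) = 0x%016llx\n",
	       (unsigned long long)__PT_LVL_OFFSET_MASK(PAGE_MASK, 2, 10));
	return 0;
}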
From patchwork Mon Jun 13 22:57:22 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880275
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 7/8] KVM: x86/mmu: Truncate paging32's PT_BASE_ADDR_MASK to 32 bits
Date: Mon, 13 Jun 2022 22:57:22 +0000
Message-Id: <20220613225723.2734132-8-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Truncate paging32's PT_BASE_ADDR_MASK to a pt_element_t, i.e. to 32 bits.
Ignoring PSE huge pages, the mask is only used in conjunction with gPTEs,
which are 32 bits, and so the address is limited to bits 31:12.  PSE huge
pages encode PA bits 39:32 in PTE bits 20:13, i.e. need custom logic to
handle their funky encoding regardless of PT_BASE_ADDR_MASK.

Note, PT_LVL_OFFSET_MASK is somewhat confusing in that it computes the
offset of the _gfn_, not of the gpa, i.e. not having bits 63:32 set in
PT_BASE_ADDR_MASK is again correct.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0bb2a6c97ebb..4087e58e2232 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -39,7 +39,7 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK PAGE_MASK
+	#define PT_BASE_ADDR_MASK ((pt_element_t)PAGE_MASK)
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
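A two-line demonstration of the truncation (illustrative, not kernel code):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t pt_element_t;	/* paging32's guest PTE type */

int main(void)
{
	uint64_t page_mask = ~((1ULL << 12) - 1);	/* 64-bit PAGE_MASK */

	/* Untruncated, the mask carries useless bits 63:32... */
	printf("PAGE_MASK               = 0x%016llx\n",
	       (unsigned long long)page_mask);
	/* ...the pt_element_t cast pares it down to bits 31:12. */
	printf("(pt_element_t)PAGE_MASK = 0x%08x\n",
	       (pt_element_t)page_mask);
	return 0;
}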
From patchwork Mon Jun 13 22:57:23 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12880274
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 8/8] KVM: x86/mmu: Use common logic for computing the 32/64-bit base PA mask
Date: Mon, 13 Jun 2022 22:57:23 +0000
Message-Id: <20220613225723.2734132-9-seanjc@google.com>
In-Reply-To: <20220613225723.2734132-1-seanjc@google.com>
References: <20220613225723.2734132-1-seanjc@google.com>

Use common logic for computing PT_BASE_ADDR_MASK for 32-bit, 64-bit, and
EPT paging.  Both PAGE_MASK and the new common logic are supersets of
what is actually needed for 32-bit paging.  PAGE_MASK sets bits 63:12 and
the former GUEST_PT64_BASE_ADDR_MASK sets bits 51:12, so regardless of
which value is used, the result will always be bits 31:12.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 2 --
 arch/x86/kvm/mmu/paging.h      | 9 ---------
 arch/x86/kvm/mmu/paging_tmpl.h | 4 +---
 3 files changed, 1 insertion(+), 14 deletions(-)
 delete mode 100644 arch/x86/kvm/mmu/paging.h

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aedb8d871030..0f0c3ebfcf51 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -53,8 +53,6 @@
 #include
 #include "trace.h"
 
-#include "paging.h"
-
 extern bool itlb_multihit_kvm_mitigation;
 
 int __read_mostly nx_huge_pages = -1;
diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
deleted file mode 100644
index 9de4976b2d46..000000000000
--- a/arch/x86/kvm/mmu/paging.h
+++ /dev/null
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/* Shadow paging constants/helpers that don't need to be #undef'd. */
-#ifndef __KVM_X86_PAGING_H
-#define __KVM_X86_PAGING_H
-
-#define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
-
-#endif /* __KVM_X86_PAGING_H */
-
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4087e58e2232..1f0dbc31e5d4 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -25,7 +25,6 @@
 	#define pt_element_t u64
 	#define guest_walker guest_walker64
 	#define FNAME(name) paging##64_##name
-	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
 	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
 	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
@@ -39,7 +38,6 @@
 	#define pt_element_t u32
 	#define guest_walker guest_walker32
 	#define FNAME(name) paging##32_##name
-	#define PT_BASE_ADDR_MASK ((pt_element_t)PAGE_MASK)
 	#define PT_LEVEL_BITS PT32_LEVEL_BITS
 	#define PT_MAX_FULL_LEVELS 2
 	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
@@ -55,7 +53,6 @@
 	#define pt_element_t u64
 	#define guest_walker guest_walkerEPT
 	#define FNAME(name) ept_##name
-	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
 	#define PT_LEVEL_BITS 9
 	#define PT_GUEST_DIRTY_SHIFT 9
 	#define PT_GUEST_ACCESSED_SHIFT 8
@@ -66,6 +63,7 @@
 #endif
 
 /* Common logic, but per-type values.  These also need to be undefined. */
+#define PT_BASE_ADDR_MASK	((pt_element_t)(((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
 #define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
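The commit's claim that both masks degrade to bits 31:12 for a 32-bit
pt_element_t can be checked at compile time. The following standalone asserts
are illustrative, not part of the patch:

#include <stdint.h>

typedef uint32_t pt_element_t;	/* as for PTTYPE == 32 */

#define PAGE_SIZE 4096ULL

/* The old 32-bit mask: pt_element_t-truncated PAGE_MASK. */
#define OLD_PT32_MASK ((pt_element_t)~(PAGE_SIZE - 1))
/* The new common mask from paging_tmpl.h, truncated the same way. */
#define NEW_COMMON_MASK \
	((pt_element_t)(((1ULL << 52) - 1) & ~(PAGE_SIZE - 1)))

_Static_assert(OLD_PT32_MASK == NEW_COMMON_MASK,
	       "old and new masks must agree for 32-bit gPTEs");
_Static_assert(NEW_COMMON_MASK == 0xfffff000u, "result is bits 31:12");

int main(void) { return 0; }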