From patchwork Wed Nov 24 12:20:43 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636919
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Maxim Levitsky
Peter Anvin" , Maxim Levitsky Subject: [PATCH 01/12] KVM: X86: Fix when shadow_root_level=5 && guest root_level<4 Date: Wed, 24 Nov 2021 20:20:43 +0800 Message-Id: <20211124122055.64424-2-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com> References: <20211124122055.64424-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan If the is an L1 with nNPT in 32bit, the shadow walk starts with pae_root. Fixes: a717a780fc4e ("KVM: x86/mmu: Support shadowing NPT when 5-level paging is enabled in host) Signed-off-by: Lai Jiangshan --- arch/x86/kvm/mmu/mmu.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6948f2d696c3..701c67c55239 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2171,10 +2171,10 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato iterator->shadow_addr = root; iterator->level = vcpu->arch.mmu->shadow_root_level; - if (iterator->level == PT64_ROOT_4LEVEL && + if (iterator->level >= PT64_ROOT_4LEVEL && vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL && !vcpu->arch.mmu->direct_map) - --iterator->level; + iterator->level = PT32E_ROOT_LEVEL; if (iterator->level == PT32E_ROOT_LEVEL) { /* From patchwork Wed Nov 24 12:20:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lai Jiangshan X-Patchwork-Id: 12636909 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4DF3BC433EF for ; Wed, 24 Nov 2021 13:41:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348058AbhKXNoZ (ORCPT ); Wed, 24 Nov 2021 08:44:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36994 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350776AbhKXNmr (ORCPT ); Wed, 24 Nov 2021 08:42:47 -0500 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 99E2DC0698C8; Wed, 24 Nov 2021 04:21:13 -0800 (PST) Received: by mail-pl1-x636.google.com with SMTP id y8so1716809plg.1; Wed, 24 Nov 2021 04:21:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=STRdUV4xca8x3wJvu+ZZqiNFhAIwyd24gouZ9h3TcH8=; b=UUg8jNBEMM0qjffXSW3kQDX7HIDDJSE99LULYxIeLUuBHqLt4EbPJYCNnpxxvFtHfg LwD8czq5vlec8oFae24B7ARrQUkTa+V3Wx3vQGvwLXp5g0W/0Xet+Ox2oQwa5G/KJ4th LBmtPPJY3pwVoZfVGAKJ/6ln70ChgBqZUQf12FhsXc8bSm+B3y1yQ6UYsJogaULeI5Zz kmdpDgDpxu2uQuM4n/qGa3ohNN+3Z1MkdcexEJ6yoX9kOhVX5PSGV5CgSr/+iGhYFLkg 8bsGoBYVA/l/jZpWg0sJeScnslR7/acLHQFFjE2VwEpO3VS4Ha0+3FJ9GSF4JGedXU+m d3iA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=STRdUV4xca8x3wJvu+ZZqiNFhAIwyd24gouZ9h3TcH8=; b=FJHzaD7Nnqxl6E7UVDZR7aesiU45qSG5KwhPsvNdEGy3kwArgk4FNO81u3MpzGzg+S 1WrMKdujdqX3u3DDTqkCWEb7Y6E95ZgjQ/lkdcjFDNIYBTvciK6hP0XYem6r37BuCrGu 
From patchwork Wed Nov 24 12:20:44 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636909
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 02/12] KVM: X86: Add parameter struct kvm_mmu *mmu into mmu->gva_to_gpa()
Date: Wed, 24 Nov 2021 20:20:44 +0800
Message-Id: <20211124122055.64424-3-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

mmu->gva_to_gpa() has no "struct kvm_mmu *mmu" parameter, so an extra
FNAME(gva_to_gpa_nested) is needed.  Adding the parameter simplifies
the code, and it makes explicit that the walk is done on
vcpu->arch.walk_mmu for a GVA and on vcpu->arch.mmu for an L2 GPA in
translate_nested_gpa(), via the new parameter.
Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  5 ++--
 arch/x86/kvm/mmu/mmu.c          | 24 +++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h  | 41 ++++-----------------------------
 arch/x86/kvm/x86.c              | 39 ++++++++++++++++++++-----------
 4 files changed, 41 insertions(+), 68 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index eb6ef0209ee6..8419aff7136f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -426,8 +426,9 @@ struct kvm_mmu {
 	int (*page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 	void (*inject_page_fault)(struct kvm_vcpu *vcpu,
 				  struct x86_exception *fault);
-	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, gpa_t gva_or_gpa,
-			    u32 access, struct x86_exception *exception);
+	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			    gpa_t gva_or_gpa, u32 access,
+			    struct x86_exception *exception);
 	gpa_t (*translate_gpa)(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 			       struct x86_exception *exception);
 	int (*sync_page)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 701c67c55239..3e00a54e23b6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3728,21 +3728,13 @@ void kvm_mmu_sync_prev_roots(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, roots_to_free);
 }
 
-static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gpa_t vaddr,
-				  u32 access, struct x86_exception *exception)
+static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				  gpa_t vaddr, u32 access,
+				  struct x86_exception *exception)
 {
 	if (exception)
 		exception->error_code = 0;
-	return vaddr;
-}
-
-static gpa_t nonpaging_gva_to_gpa_nested(struct kvm_vcpu *vcpu, gpa_t vaddr,
-					 u32 access,
-					 struct x86_exception *exception)
-{
-	if (exception)
-		exception->error_code = 0;
-	return vcpu->arch.nested_mmu.translate_gpa(vcpu, vaddr, access, exception);
+	return mmu->translate_gpa(vcpu, vaddr, access, exception);
 }
 
 static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
@@ -4982,13 +4974,13 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	 * the gva_to_gpa functions between mmu and nested_mmu are swapped.
 	 */
 	if (!is_paging(vcpu))
-		g_context->gva_to_gpa = nonpaging_gva_to_gpa_nested;
+		g_context->gva_to_gpa = nonpaging_gva_to_gpa;
 	else if (is_long_mode(vcpu))
-		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
+		g_context->gva_to_gpa = paging64_gva_to_gpa;
 	else if (is_pae(vcpu))
-		g_context->gva_to_gpa = paging64_gva_to_gpa_nested;
+		g_context->gva_to_gpa = paging64_gva_to_gpa;
 	else
-		g_context->gva_to_gpa = paging32_gva_to_gpa_nested;
+		g_context->gva_to_gpa = paging32_gva_to_gpa;
 
 	reset_guest_paging_metadata(vcpu, g_context);
 }
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f87d36898c44..4e203fe703b0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -547,16 +547,6 @@ static int FNAME(walk_addr)(struct guest_walker *walker,
 					access);
 }
 
-#if PTTYPE != PTTYPE_EPT
-static int FNAME(walk_addr_nested)(struct guest_walker *walker,
-				   struct kvm_vcpu *vcpu, gva_t addr,
-				   u32 access)
-{
-	return FNAME(walk_addr_generic)(walker, vcpu, &vcpu->arch.nested_mmu,
-					addr, access);
-}
-#endif
-
 static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
@@ -999,50 +989,29 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 }
 
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
-static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gpa_t addr, u32 access,
+static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			       gpa_t addr, u32 access,
 			       struct x86_exception *exception)
 {
 	struct guest_walker walker;
 	gpa_t gpa = UNMAPPED_GVA;
 	int r;
 
-	r = FNAME(walk_addr)(&walker, vcpu, addr, access);
-
-	if (r) {
-		gpa = gfn_to_gpa(walker.gfn);
-		gpa |= addr & ~PAGE_MASK;
-	} else if (exception)
-		*exception = walker.fault;
-
-	return gpa;
-}
-
-#if PTTYPE != PTTYPE_EPT
-/* Note, gva_to_gpa_nested() is only used to translate L2 GVAs. */
-static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
-				      u32 access,
-				      struct x86_exception *exception)
-{
-	struct guest_walker walker;
-	gpa_t gpa = UNMAPPED_GVA;
-	int r;
-
 #ifndef CONFIG_X86_64
 	/* A 64-bit GVA should be impossible on 32-bit KVM. */
-	WARN_ON_ONCE(vaddr >> 32);
+	WARN_ON_ONCE((addr >> 32) && mmu == vcpu->arch.walk_mmu);
 #endif
 
-	r = FNAME(walk_addr_nested)(&walker, vcpu, vaddr, access);
+	r = FNAME(walk_addr_generic)(&walker, vcpu, mmu, addr, access);
 
 	if (r) {
 		gpa = gfn_to_gpa(walker.gfn);
-		gpa |= vaddr & ~PAGE_MASK;
+		gpa |= addr & ~PAGE_MASK;
 	} else if (exception)
 		*exception = walker.fault;
 
 	return gpa;
 }
-#endif
 
 /*
  * Using the cached information from sp->gfns is safe because:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 04e8dabc187d..808786677b2b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6460,13 +6460,14 @@ void kvm_get_segment(struct kvm_vcpu *vcpu,
 gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 			   struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gpa_t t_gpa;
 
 	BUG_ON(!mmu_is_nested(vcpu));
 
 	/* NPT walks are always user-walks */
 	access |= PFERR_USER_MASK;
-	t_gpa = vcpu->arch.mmu->gva_to_gpa(vcpu, gpa, access, exception);
+	t_gpa = mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
 
 	return t_gpa;
 }
@@ -6474,25 +6475,31 @@ gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 			      struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
 	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
-	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 
 gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva,
 			       struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
 	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_FETCH_MASK;
-	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 
 gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 			       struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
 	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_WRITE_MASK;
-	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_write);
 
@@ -6500,19 +6507,21 @@ EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_write);
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
 				struct x86_exception *exception)
 {
-	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, 0, exception);
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
+	return mmu->gva_to_gpa(vcpu, mmu, gva, 0, exception);
 }
 
 static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
 				      struct kvm_vcpu *vcpu, u32 access,
 				      struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	void *data = val;
 	int r = X86EMUL_CONTINUE;
 
 	while (bytes) {
-		gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr, access,
-							    exception);
+		gpa_t gpa = mmu->gva_to_gpa(vcpu, mmu, addr, access, exception);
 		unsigned offset = addr & (PAGE_SIZE-1);
 		unsigned toread = min(bytes, (unsigned)PAGE_SIZE - offset);
 		int ret;
@@ -6540,13 +6549,14 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 				struct x86_exception *exception)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	unsigned offset;
 	int ret;
 
 	/* Inline kvm_read_guest_virt_helper for speed. */
-	gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr, access|PFERR_FETCH_MASK,
-						    exception);
+	gpa_t gpa = mmu->gva_to_gpa(vcpu, mmu, addr, access|PFERR_FETCH_MASK,
+				    exception);
 	if (unlikely(gpa == UNMAPPED_GVA))
 		return X86EMUL_PROPAGATE_FAULT;
 
@@ -6605,13 +6615,12 @@ static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes
 				       struct kvm_vcpu *vcpu, u32 access,
 				       struct x86_exception *exception)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	void *data = val;
 	int r = X86EMUL_CONTINUE;
 
 	while (bytes) {
-		gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr,
-							    access,
-							    exception);
+		gpa_t gpa = mmu->gva_to_gpa(vcpu, mmu, addr, access, exception);
 		unsigned offset = addr & (PAGE_SIZE-1);
 		unsigned towrite = min(bytes, (unsigned)PAGE_SIZE - offset);
 		int ret;
@@ -6698,6 +6707,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 				gpa_t *gpa, struct x86_exception *exception,
 				bool write)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	u32 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0)
 		| (write ? PFERR_WRITE_MASK : 0);
 
@@ -6715,7 +6725,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		return 1;
 	}
 
-	*gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
+	*gpa = mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 
 	if (*gpa == UNMAPPED_GVA)
 		return -1;
@@ -12268,12 +12278,13 @@ EXPORT_SYMBOL_GPL(kvm_spec_ctrl_test_value);
 
 void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code)
 {
+	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	struct x86_exception fault;
 	u32 access = error_code & (PFERR_WRITE_MASK | PFERR_FETCH_MASK |
 				   PFERR_USER_MASK);
 
 	if (!(error_code & PFERR_PRESENT_MASK) ||
-	    vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, &fault) != UNMAPPED_GVA) {
+	    mmu->gva_to_gpa(vcpu, mmu, gva, access, &fault) != UNMAPPED_GVA) {
 		/*
 		 * If vcpu->arch.walk_mmu->gva_to_gpa succeeded, the page
 		 * tables probably do not match the TLB.  Just proceed
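The shape of the change is easier to see outside the kernel. Below is a
minimal standalone C sketch (all names are stand-ins, not KVM's real
types) of the idea: once the translator receives the MMU to walk as an
explicit parameter, the paired _nested wrapper variants disappear and
every call site states which page tables it walks.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gpa_t;

struct mmu;
struct vcpu {
	struct mmu *walk_mmu;	/* GVAs are walked here */
	struct mmu *mmu;	/* L2 GPAs are walked here when nested */
};

struct mmu {
	const char *name;
	/* old: gpa_t (*gva_to_gpa)(struct vcpu *, gpa_t);
	 * new: the mmu to walk is passed explicitly. */
	gpa_t (*gva_to_gpa)(struct vcpu *vcpu, struct mmu *mmu, gpa_t addr);
};

static gpa_t walk(struct vcpu *vcpu, struct mmu *mmu, gpa_t addr)
{
	printf("walking %s for 0x%llx\n", mmu->name, (unsigned long long)addr);
	return addr;	/* identity translation, enough for the sketch */
}

int main(void)
{
	struct mmu root = { "walk_mmu", walk }, nested = { "nested walk", walk };
	struct vcpu vcpu = { &root, &nested };

	/* like kvm_mmu_gva_to_gpa_read(): a GVA goes through walk_mmu */
	vcpu.walk_mmu->gva_to_gpa(&vcpu, vcpu.walk_mmu, 0x1000);
	/* like translate_nested_gpa(): an L2 GPA goes through vcpu->mmu */
	vcpu.mmu->gva_to_gpa(&vcpu, vcpu.mmu, 0x2000);
	return 0;
}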
Peter Anvin" Subject: [PATCH 03/12] KVM: X86: Remove mmu->translate_gpa Date: Wed, 24 Nov 2021 20:20:45 +0800 Message-Id: <20211124122055.64424-4-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com> References: <20211124122055.64424-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan Reduce an indirect function call (retpoline) and some intialization code. Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 4 ---- arch/x86/kvm/mmu.h | 13 +++++++++++++ arch/x86/kvm/mmu/mmu.c | 11 +---------- arch/x86/kvm/mmu/paging_tmpl.h | 7 +++---- arch/x86/kvm/x86.c | 4 ++-- 5 files changed, 19 insertions(+), 20 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 8419aff7136f..dd16fdedc0e8 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -429,8 +429,6 @@ struct kvm_mmu { gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, gpa_t gva_or_gpa, u32 access, struct x86_exception *exception); - gpa_t (*translate_gpa)(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access, - struct x86_exception *exception); int (*sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp); void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa); @@ -1764,8 +1762,6 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn); void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, ulong roots_to_free); void kvm_mmu_free_guest_mode_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu); -gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access, - struct x86_exception *exception); gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva, struct x86_exception *exception); gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva, diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 9ae6168d381e..97e13c2988b3 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -351,4 +351,17 @@ static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count) { atomic64_add(count, &kvm->stat.pages[level - 1]); } + +gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access, + struct x86_exception *exception); + +static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu, + struct kvm_mmu *mmu, + gpa_t gpa, u32 access, + struct x86_exception *exception) +{ + if (mmu != &vcpu->arch.nested_mmu) + return gpa; + return translate_nested_gpa(vcpu, gpa, access, exception); +} #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3e00a54e23b6..f3aa91db4a7e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -335,12 +335,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte) return likely(kvm_gen == spte_gen); } -static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access, - struct x86_exception *exception) -{ - return gpa; -} - static int is_cpuid_PSE36(void) { return 1; @@ -3734,7 +3728,7 @@ static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, { if (exception) exception->error_code = 0; - return mmu->translate_gpa(vcpu, vaddr, access, exception); + return kvm_translate_gpa(vcpu, mmu, vaddr, access, exception); } static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct) @@ -5487,7 +5481,6 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu) mmu->root_hpa = INVALID_PAGE; mmu->root_pgd = 0; - mmu->translate_gpa = 
translate_gpa; for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID; @@ -5549,8 +5542,6 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) vcpu->arch.mmu = &vcpu->arch.root_mmu; vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; - vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa; - ret = __kvm_mmu_create(vcpu, &vcpu->arch.guest_mmu); if (ret) return ret; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 4e203fe703b0..5c78300fc7d9 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -403,9 +403,8 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, walker->table_gfn[walker->level - 1] = table_gfn; walker->pte_gpa[walker->level - 1] = pte_gpa; - real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(table_gfn), - nested_access, - &walker->fault); + real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn), + nested_access, &walker->fault); /* * FIXME: This can happen if emulation (for of an INS/OUTS @@ -467,7 +466,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, if (PTTYPE == 32 && walker->level > PG_LEVEL_4K && is_cpuid_PSE36()) gfn += pse36_gfn_delta(pte); - real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(gfn), access, &walker->fault); + real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault); if (real_gpa == UNMAPPED_GVA) return 0; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 808786677b2b..25e278ba4666 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -810,8 +810,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3) * If the MMU is nested, CR3 holds an L2 GPA and needs to be translated * to an L1 GPA. */ - real_gpa = mmu->translate_gpa(vcpu, gfn_to_gpa(pdpt_gfn), - PFERR_USER_MASK | PFERR_WRITE_MASK, NULL); + real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(pdpt_gfn), + PFERR_USER_MASK | PFERR_WRITE_MASK, NULL); if (real_gpa == UNMAPPED_GVA) return 0; From patchwork Wed Nov 24 12:20:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lai Jiangshan X-Patchwork-Id: 12636911 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F02F9C433F5 for ; Wed, 24 Nov 2021 13:41:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348547AbhKXNo1 (ORCPT ); Wed, 24 Nov 2021 08:44:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350380AbhKXNmt (ORCPT ); Wed, 24 Nov 2021 08:42:49 -0500 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EDF68C0698CA; Wed, 24 Nov 2021 04:21:24 -0800 (PST) Received: by mail-pj1-x1034.google.com with SMTP id j6-20020a17090a588600b001a78a5ce46aso4945564pji.0; Wed, 24 Nov 2021 04:21:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=i9V4Yvz1w6hXJCTJPzmC+m66wFQ/GQ8zbRs/udK4i6o=; b=IpjGHsoraNKAijhHmYeTjdAq5Jpm/SaJ2j2xp3JzpAW0/POUu4Stlj21Ylqg1k/tsl SY/9wuPuYUnUzE3B5gOuOfxe/8bx6ktxKGiSK65zQFlmfhcdn6o3UTHKTkeVIcb3FNcL 
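The removal of the ->translate_gpa hook follows a common
devirtualization pattern. The sketch below (standalone C, not kernel
code) mirrors the shape of the new kvm_translate_gpa() helper: the
common, non-nested case becomes a pointer comparison instead of an
indirect call, which matters under retpolines.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gpa_t;

struct kvm_mmu { const char *name; };

struct vcpu {
	struct kvm_mmu root_mmu;
	struct kvm_mmu nested_mmu;
};

static gpa_t translate_nested_gpa(struct vcpu *vcpu, gpa_t gpa)
{
	/* stand-in for the real L2->L1 GPA walk */
	printf("nested walk of 0x%llx\n", (unsigned long long)gpa);
	return gpa;
}

/* same shape as the new kvm_translate_gpa() helper in mmu.h */
static inline gpa_t kvm_translate_gpa(struct vcpu *vcpu, struct kvm_mmu *mmu,
				      gpa_t gpa)
{
	if (mmu != &vcpu->nested_mmu)
		return gpa;	/* what the old translate_gpa() hook returned */
	return translate_nested_gpa(vcpu, gpa);
}

int main(void)
{
	struct vcpu vcpu = { { "root" }, { "nested" } };

	/* non-nested mmu: identity, no indirect call needed */
	printf("0x%llx\n", (unsigned long long)
	       kvm_translate_gpa(&vcpu, &vcpu.root_mmu, 0x1000));
	/* nested mmu: falls through to the real translation */
	kvm_translate_gpa(&vcpu, &vcpu.nested_mmu, 0x2000);
	return 0;
}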
From patchwork Wed Nov 24 12:20:46 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636911
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 04/12] KVM: X86: Use vcpu->arch.walk_mmu for kvm_mmu_invlpg()
Date: Wed, 24 Nov 2021 20:20:46 +0800
Message-Id: <20211124122055.64424-5-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

By design, a guest virtual address only makes sense for
vcpu->arch.walk_mmu, which is often the same as vcpu->arch.mmu.  But
the two have different semantics by design, so vcpu->arch.walk_mmu
should be used instead, as at the other call sites of
kvm_mmu_invalidate_gva().

In theory, if L2's invlpg is being emulated by L0 when nTDP is used
(in practice, this hardly ever happens), ->tlb_flush_gva() should be
called to flush the hardware TLB, but using vcpu->arch.mmu causes it
to be incorrectly skipped.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f3aa91db4a7e..72ce0d78435e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5353,7 +5353,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
+	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
From patchwork Wed Nov 24 12:20:47 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636923
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 05/12] KVM: X86: Change the type of a parameter of kvm_mmu_invalidate_gva() and mmu->invlpg() to gpa_t
Date: Wed, 24 Nov 2021 20:20:47 +0800
Message-Id: <20211124122055.64424-6-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

When kvm_mmu_invalidate_gva() is called for nested TDP, the @gva is an
L2 GPA, so the type of the parameter should be gpa_t, as in
mmu->gva_to_gpa().  The parameter is also renamed to gva_or_l2pa for
self-documentation.

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  6 +++---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h  |  7 ++++---
 3 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dd16fdedc0e8..e382596baa1d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -427,11 +427,11 @@ struct kvm_mmu {
 	void (*inject_page_fault)(struct kvm_vcpu *vcpu,
 				  struct x86_exception *fault);
 	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gpa_t gva_or_gpa, u32 access,
+			    gpa_t gva_or_l2pa, u32 access,
 			    struct x86_exception *exception);
 	int (*sync_page)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp);
-	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
+	void (*invlpg)(struct kvm_vcpu *vcpu, gpa_t gva_or_l2pa, hpa_t root_hpa);
 	hpa_t root_hpa;
 	gpa_t root_pgd;
 	union kvm_mmu_role mmu_role;
@@ -1785,7 +1785,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
 		       void *insn, int insn_len);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gva_t gva, hpa_t root_hpa);
+			    gpa_t gva_or_l2pa, hpa_t root_hpa);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 72ce0d78435e..d3bad4ae72fb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5313,24 +5313,24 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gva_t gva, hpa_t root_hpa)
+			    gpa_t gva_or_l2pa, hpa_t root_hpa)
 {
 	int i;
 
-	/* It's actually a GPA for vcpu->arch.guest_mmu. */
+	/* It's actually a L2 GPA for vcpu->arch.guest_mmu. */
 	if (mmu != &vcpu->arch.guest_mmu) {
 		/* INVLPG on a non-canonical address is a NOP according to the SDM. */
-		if (is_noncanonical_address(gva, vcpu))
+		if (is_noncanonical_address(gva_or_l2pa, vcpu))
 			return;
 
-		static_call(kvm_x86_tlb_flush_gva)(vcpu, gva);
+		static_call(kvm_x86_tlb_flush_gva)(vcpu, gva_or_l2pa);
 	}
 
 	if (!mmu->invlpg)
 		return;
 
 	if (root_hpa == INVALID_PAGE) {
-		mmu->invlpg(vcpu, gva, mmu->root_hpa);
+		mmu->invlpg(vcpu, gva_or_l2pa, mmu->root_hpa);
 
 		/*
 		 * INVLPG is required to invalidate any global mappings for the VA,
@@ -5345,9 +5345,9 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		 */
 		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 			if (VALID_PAGE(mmu->prev_roots[i].hpa))
-				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+				mmu->invlpg(vcpu, gva_or_l2pa, mmu->prev_roots[i].hpa);
 	} else {
-		mmu->invlpg(vcpu, gva, root_hpa);
+		mmu->invlpg(vcpu, gva_or_l2pa, root_hpa);
 	}
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5c78300fc7d9..7b86209e73f9 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -928,7 +928,8 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
 
-static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
+/* Note, @gva_or_l2pa is a GPA when invlpg() invalidates an L2 GPA. */
+static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gpa_t gva_or_l2pa, hpa_t root_hpa)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	struct kvm_mmu_page *sp;
@@ -936,7 +937,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 	int level;
 	u64 *sptep;
 
-	vcpu_clear_mmio_info(vcpu, gva);
+	vcpu_clear_mmio_info(vcpu, gva_or_l2pa);
 
 	/*
 	 * No need to check return value here, rmap_can_add() can
@@ -950,7 +951,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 	}
 
 	write_lock(&vcpu->kvm->mmu_lock);
-	for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
+	for_each_shadow_entry_using_root(vcpu, root_hpa, gva_or_l2pa, iterator) {
 		level = iterator.level;
 		sptep = iterator.sptep;
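The type change is not only cosmetic. gva_t is an unsigned long, i.e.
32 bits on a 32-bit build, while gpa_t is always 64-bit, and an L2 GPA
can exceed 32 bits even when no guest GVA can. A minimal standalone
sketch of the truncation that the gpa_t parameter avoids:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t gva32_t;	/* gva_t on a 32-bit host build */
typedef uint64_t gpa_t;		/* gpa_t is always 64-bit */

int main(void)
{
	gpa_t l2_gpa = 0x1234567890ULL;		/* an L2 GPA above 4 GiB */

	gva32_t as_gva = (gva32_t)l2_gpa;	/* the old parameter type */
	gpa_t as_gpa = l2_gpa;			/* the new parameter type */

	printf("as gva_t (32-bit): 0x%llx\n", (unsigned long long)as_gva);
	printf("as gpa_t         : 0x%llx\n", (unsigned long long)as_gpa);
	return 0;	/* prints 0x34567890 vs 0x1234567890 */
}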
From patchwork Wed Nov 24 12:20:48 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636925
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 06/12] KVM: X86: Add huge_page_level to __reset_rsvds_bits_mask_ept()
Date: Wed, 24 Nov 2021 20:20:48 +0800
Message-Id: <20211124122055.64424-7-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Whether bit 7 of a PTE is reserved depends on the supported
large-page level.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d3bad4ae72fb..8a371d6c2291 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4339,22 +4339,28 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu,
 
 static void
 __reset_rsvds_bits_mask_ept(struct rsvd_bits_validate *rsvd_check,
-			    u64 pa_bits_rsvd, bool execonly)
+			    u64 pa_bits_rsvd, bool execonly, int huge_page_level)
 {
 	u64 high_bits_rsvd = pa_bits_rsvd & rsvd_bits(0, 51);
+	u64 large_1g_rsvd = 0, large_2m_rsvd = 0;
 	u64 bad_mt_xwr;
 
+	if (huge_page_level < PG_LEVEL_1G)
+		large_1g_rsvd = rsvd_bits(7, 7);
+	if (huge_page_level < PG_LEVEL_2M)
+		large_2m_rsvd = rsvd_bits(7, 7);
+
 	rsvd_check->rsvd_bits_mask[0][4] = high_bits_rsvd | rsvd_bits(3, 7);
 	rsvd_check->rsvd_bits_mask[0][3] = high_bits_rsvd | rsvd_bits(3, 7);
-	rsvd_check->rsvd_bits_mask[0][2] = high_bits_rsvd | rsvd_bits(3, 6);
-	rsvd_check->rsvd_bits_mask[0][1] = high_bits_rsvd | rsvd_bits(3, 6);
+	rsvd_check->rsvd_bits_mask[0][2] = high_bits_rsvd | rsvd_bits(3, 6) | large_1g_rsvd;
+	rsvd_check->rsvd_bits_mask[0][1] = high_bits_rsvd | rsvd_bits(3, 6) | large_2m_rsvd;
 	rsvd_check->rsvd_bits_mask[0][0] = high_bits_rsvd;
 
 	/* large page */
 	rsvd_check->rsvd_bits_mask[1][4] = rsvd_check->rsvd_bits_mask[0][4];
 	rsvd_check->rsvd_bits_mask[1][3] = rsvd_check->rsvd_bits_mask[0][3];
-	rsvd_check->rsvd_bits_mask[1][2] = high_bits_rsvd | rsvd_bits(12, 29);
-	rsvd_check->rsvd_bits_mask[1][1] = high_bits_rsvd | rsvd_bits(12, 20);
+	rsvd_check->rsvd_bits_mask[1][2] = high_bits_rsvd | rsvd_bits(12, 29) | large_1g_rsvd;
+	rsvd_check->rsvd_bits_mask[1][1] = high_bits_rsvd | rsvd_bits(12, 20) | large_2m_rsvd;
 	rsvd_check->rsvd_bits_mask[1][0] = rsvd_check->rsvd_bits_mask[0][0];
 
 	bad_mt_xwr = 0xFFull << (2 * 8);	/* bits 3..5 must not be 2 */
@@ -4370,10 +4376,11 @@ __reset_rsvds_bits_mask_ept(struct rsvd_bits_validate *rsvd_check,
 }
 
 static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
-				      struct kvm_mmu *context, bool execonly)
+				      struct kvm_mmu *context, bool execonly, int huge_page_level)
 {
 	__reset_rsvds_bits_mask_ept(&context->guest_rsvd_check,
-				    vcpu->arch.reserved_gpa_bits, execonly);
+				    vcpu->arch.reserved_gpa_bits, execonly,
+				    huge_page_level);
 }
 
 static inline u64 reserved_hpa_bits(void)
@@ -4449,7 +4456,8 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 					false, true);
 	else
 		__reset_rsvds_bits_mask_ept(shadow_zero_check,
-					    reserved_hpa_bits(), false);
+					    reserved_hpa_bits(), false,
+					    max_huge_page_level);
 
 	if (!shadow_me_mask)
 		return;
@@ -4469,7 +4477,8 @@ reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 				bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
-				    reserved_hpa_bits(), execonly);
+				    reserved_hpa_bits(), execonly,
+				    max_huge_page_level);
 }
 
 #define BYTE_MASK(access) \
@@ -4904,7 +4913,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 
 	update_permission_bitmask(context, true);
 	update_pkru_bitmask(context);
-	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
+	reset_rsvds_bits_mask_ept(vcpu, context, execonly, max_huge_page_level);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
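A standalone sketch (plain C, not kernel code; rsvd_bits() is modeled
after the kernel helper of the same name) of what the new
huge_page_level argument does: when the hardware does not support a
large page at a given table level, bit 7, the large-page select bit at
the 2M and 1G levels, becomes a reserved bit at that level.

#include <stdint.h>
#include <stdio.h>

#define PG_LEVEL_4K 1
#define PG_LEVEL_2M 2
#define PG_LEVEL_1G 3

/* mask with bits s..e set, mirroring the kernel's rsvd_bits() */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	int huge_page_level = PG_LEVEL_2M;	/* say: 2M ok, 1G unsupported */
	uint64_t large_1g_rsvd = 0, large_2m_rsvd = 0;

	if (huge_page_level < PG_LEVEL_1G)
		large_1g_rsvd = rsvd_bits(7, 7);
	if (huge_page_level < PG_LEVEL_2M)
		large_2m_rsvd = rsvd_bits(7, 7);

	/* bit 7 reserved at the 1G level, usable at the 2M level */
	printf("1G-level extra reserved mask: 0x%llx\n",
	       (unsigned long long)large_1g_rsvd);	/* 0x80 */
	printf("2M-level extra reserved mask: 0x%llx\n",
	       (unsigned long long)large_2m_rsvd);	/* 0x0 */
	return 0;
}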
From patchwork Wed Nov 24 12:20:49 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636913
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 07/12] KVM: X86: Add parameter huge_page_level to kvm_init_shadow_ept_mmu()
Date: Wed, 24 Nov 2021 20:20:49 +0800
Message-Id: <20211124122055.64424-8-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

The supported large-page level on nEPT affects the rsvds_bits_mask.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu.h              | 3 ++-
 arch/x86/kvm/mmu/mmu.c          | 5 +++--
 arch/x86/kvm/vmx/capabilities.h | 9 +++++++++
 arch/x86/kvm/vmx/nested.c       | 8 +++++---
 4 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 97e13c2988b3..e9fbb2c8bbe2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -71,7 +71,8 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu);
 void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
 			     unsigned long cr4, u64 efer, gpa_t nested_cr3);
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
-			     bool accessed_dirty, gpa_t new_eptp);
+			     int huge_page_level, bool accessed_dirty,
+			     gpa_t new_eptp);
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
 int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 			  u64 fault_address, char *insn, int insn_len);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8a371d6c2291..f5a1da112daf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4886,7 +4886,8 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 }
 
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
-			     bool accessed_dirty, gpa_t new_eptp)
+			     int huge_page_level, bool accessed_dirty,
+			     gpa_t new_eptp)
 {
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
 	u8 level = vmx_eptp_page_walk_level(new_eptp);
@@ -4913,7 +4914,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 
 	update_permission_bitmask(context, true);
 	update_pkru_bitmask(context);
-	reset_rsvds_bits_mask_ept(vcpu, context, execonly, max_huge_page_level);
+	reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 4705ad55abb5..c8029b7845b6 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -312,6 +312,15 @@ static inline bool cpu_has_vmx_ept_1g_page(void)
 	return vmx_capability.ept & VMX_EPT_1GB_PAGE_BIT;
 }
 
+static inline int ept_caps_to_lpage_level(u32 ept_caps)
+{
+	if (ept_caps & VMX_EPT_1GB_PAGE_BIT)
+		return PG_LEVEL_1G;
+	if (ept_caps & VMX_EPT_2MB_PAGE_BIT)
+		return PG_LEVEL_2M;
+	return PG_LEVEL_4K;
+}
+
 static inline bool cpu_has_vmx_ept_ad_bits(void)
 {
 	return vmx_capability.ept & VMX_EPT_AD_BIT;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d8d0dbc4fc18..20e126de1c96 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -391,9 +391,11 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
 
 static void nested_ept_new_eptp(struct kvm_vcpu *vcpu)
 {
-	kvm_init_shadow_ept_mmu(vcpu,
-				to_vmx(vcpu)->nested.msrs.ept_caps &
-				VMX_EPT_EXECUTE_ONLY_BIT,
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	bool execonly = vmx->nested.msrs.ept_caps & VMX_EPT_EXECUTE_ONLY_BIT;
+	int ept_lpage_level = ept_caps_to_lpage_level(vmx->nested.msrs.ept_caps);
+
+	kvm_init_shadow_ept_mmu(vcpu, execonly, ept_lpage_level,
 				nested_ept_ad_enabled(vcpu),
 				nested_ept_get_eptp(vcpu));
 }
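For reference, a standalone demo of the helper this patch introduces.
The PG_LEVEL_* values and the VMX_EPT_*_PAGE_BIT positions (bits 16
and 17 of the EPT capability word) match the kernel definitions;
everything else is scaffolding for the demo.

#include <stdint.h>
#include <stdio.h>

#define PG_LEVEL_4K 1
#define PG_LEVEL_2M 2
#define PG_LEVEL_1G 3

#define VMX_EPT_2MB_PAGE_BIT (1u << 16)
#define VMX_EPT_1GB_PAGE_BIT (1u << 17)

/* collapses the capability bits into a single "largest level" */
static int ept_caps_to_lpage_level(uint32_t ept_caps)
{
	if (ept_caps & VMX_EPT_1GB_PAGE_BIT)
		return PG_LEVEL_1G;
	if (ept_caps & VMX_EPT_2MB_PAGE_BIT)
		return PG_LEVEL_2M;
	return PG_LEVEL_4K;
}

int main(void)
{
	printf("%d\n", ept_caps_to_lpage_level(VMX_EPT_1GB_PAGE_BIT |
					       VMX_EPT_2MB_PAGE_BIT)); /* 3 */
	printf("%d\n", ept_caps_to_lpage_level(VMX_EPT_2MB_PAGE_BIT));  /* 2 */
	printf("%d\n", ept_caps_to_lpage_level(0));                     /* 1 */
	return 0;
}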
From patchwork Wed Nov 24 12:20:50 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12636915
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH 08/12] KVM: VMX: Use ept_caps_to_lpage_level() in hardware_setup()
Date: Wed, 24 Nov 2021 20:20:50 +0800
Message-Id: <20211124122055.64424-9-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Using ept_caps_to_lpage_level() is simpler.
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kvm/vmx/vmx.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c6d9c50ea5d4..3b07f5bd86b1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7693,7 +7693,7 @@ static __init int hardware_setup(void)
 {
        unsigned long host_bndcfgs;
        struct desc_ptr dt;
-       int r, ept_lpage_level;
+       int r;
 
        store_idt(&dt);
        host_idt_base = dt.address;
@@ -7790,16 +7790,8 @@ static __init int hardware_setup(void)
        kvm_mmu_set_ept_masks(enable_ept_ad_bits,
                              cpu_has_vmx_ept_execute_only());
 
-       if (!enable_ept)
-               ept_lpage_level = 0;
-       else if (cpu_has_vmx_ept_1g_page())
-               ept_lpage_level = PG_LEVEL_1G;
-       else if (cpu_has_vmx_ept_2m_page())
-               ept_lpage_level = PG_LEVEL_2M;
-       else
-               ept_lpage_level = PG_LEVEL_4K;
        kvm_configure_mmu(enable_ept, 0, vmx_get_max_tdp_level(),
-                         ept_lpage_level);
+                         ept_caps_to_lpage_level(vmx_capability.ept));
 
        /*
         * Only enable PML when hardware supports PML feature, and both EPT
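One nuance worth noting (an observation, not from the changelog): the removed ladder forced ept_lpage_level to 0 when EPT was disabled, while the new call always passes the capability-derived level; that should be benign because kvm_configure_mmu() only appears to consult the huge-page level when TDP is enabled. A quick cross-check that the two computations agree in the enabled case (same mirrored constants as above, hypothetical helper names):

#include <assert.h>
#include <stdint.h>

#define VMX_EPT_2MB_PAGE_BIT 0x00010000u
#define VMX_EPT_1GB_PAGE_BIT 0x00020000u

enum { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

/* The if/else ladder removed from hardware_setup(), enable_ept == true case. */
static int old_ladder(uint32_t caps)
{
        if (caps & VMX_EPT_1GB_PAGE_BIT)
                return PG_LEVEL_1G;
        else if (caps & VMX_EPT_2MB_PAGE_BIT)
                return PG_LEVEL_2M;
        else
                return PG_LEVEL_4K;
}

static int ept_caps_to_lpage_level(uint32_t caps)
{
        if (caps & VMX_EPT_1GB_PAGE_BIT)
                return PG_LEVEL_1G;
        if (caps & VMX_EPT_2MB_PAGE_BIT)
                return PG_LEVEL_2M;
        return PG_LEVEL_4K;
}

int main(void)
{
        /* Exhaustive over the two capability bits of interest. */
        for (unsigned i = 0; i < 4; i++) {
                uint32_t caps = (i & 1 ? VMX_EPT_2MB_PAGE_BIT : 0) |
                                (i & 2 ? VMX_EPT_1GB_PAGE_BIT : 0);
                assert(old_ladder(caps) == ept_caps_to_lpage_level(caps));
        }
        return 0;
}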
From patchwork Wed Nov 24 12:20:51 2021
X-Patchwork-Submitter: Lai Jiangshan 
X-Patchwork-Id: 12636939
From: Lai Jiangshan 
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan , Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , linux-doc@vger.kernel.org
Subject: [PATCH 09/12] KVM: X86: Rename gpte_is_8_bytes to has_4_byte_gpte and invert the direction
Date: Wed, 24 Nov 2021 20:20:51 +0800
Message-Id: <20211124122055.64424-10-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan 

Setting gpte_is_8_bytes=1 when there are no guest PTEs at all is
confusing. Rename the bit to has_4_byte_gpte and invert its direction:
it is set only when guest PTEs exist and the guest PTE size is 4 bytes.
Note that in the nonpaging case the value is not simply inverted; it
stays false, since there are no guest PTEs.

Suggested-by: Paolo Bonzini 
Signed-off-by: Lai Jiangshan 
---
 Documentation/virt/kvm/mmu.rst  |  8 ++++----
 arch/x86/include/asm/kvm_host.h |  8 ++++----
 arch/x86/kvm/mmu/mmu.c          | 12 ++++++------
 arch/x86/kvm/mmu/mmutrace.h     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/Documentation/virt/kvm/mmu.rst b/Documentation/virt/kvm/mmu.rst
index f60f5488e121..5b1ebad24c77 100644
--- a/Documentation/virt/kvm/mmu.rst
+++ b/Documentation/virt/kvm/mmu.rst
@@ -161,7 +161,7 @@ Shadow pages contain the following information:
     If clear, this page corresponds to a guest page table denoted by the gfn
     field.
   role.quadrant:
-    When role.gpte_is_8_bytes=0, the guest uses 32-bit gptes while the host uses 64-bit
+    When role.has_4_byte_gpte=1, the guest uses 32-bit gptes while the host uses 64-bit
     sptes. That means a guest page table contains more ptes than the host,
     so multiple shadow pages are needed to shadow one guest page.
     For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
@@ -177,9 +177,9 @@ Shadow pages contain the following information:
     The page is invalid and should not be used.
     It is a root page that is currently pinned (by a cpu hardware register
     pointing to it); once it is unpinned it will be destroyed.
-  role.gpte_is_8_bytes:
-    Reflects the size of the guest PTE for which the page is valid, i.e. '1'
-    if 64-bit gptes are in use, '0' if 32-bit gptes are in use.
+  role.has_4_byte_gpte:
+    Reflects the size of the guest PTE for which the page is valid, i.e. '0'
+    if direct map or 64-bit gptes are in use, '1' if 32-bit gptes are in use.
   role.efer_nx:
     Contains the value of efer.nx for which the page is valid.
   role.cr0_wp:
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e382596baa1d..01e50703c878 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -296,14 +296,14 @@ struct kvm_kernel_irq_routing_entry;
  *
  * - invalid shadow pages are not accounted, so the bits are effectively 18
  *
- * - quadrant will only be used if gpte_is_8_bytes=0 (non-PAE paging);
+ * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
  *   execonly and ad_disabled are only used for nested EPT which has
- *   gpte_is_8_bytes=1. Therefore, 2 bits are always unused.
+ *   has_4_byte_gpte=0. Therefore, 2 bits are always unused.
  *
  * - the 4 bits of level are effectively limited to the values 2/3/4/5,
  *   as 4k SPs are not tracked (allowed to go unsync). In addition non-PAE
  *   paging has exactly one upper level, making level completely redundant
- *   when gpte_is_8_bytes=0.
+ *   when has_4_byte_gpte=1.
  *
  * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
  *   cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
@@ -315,7 +315,7 @@ union kvm_mmu_page_role {
        u32 word;
        struct {
                unsigned level:4;
-               unsigned gpte_is_8_bytes:1;
+               unsigned has_4_byte_gpte:1;
                unsigned quadrant:2;
                unsigned direct:1;
                unsigned access:3;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f5a1da112daf..9fb9927264d8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2077,7 +2077,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
        role.level = level;
        role.direct = direct;
        role.access = access;
-       if (!direct_mmu && !role.gpte_is_8_bytes) {
+       if (role.has_4_byte_gpte) {
                quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
                quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
                role.quadrant = quadrant;
@@ -4727,7 +4727,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
        role.base.ad_disabled = (shadow_accessed_mask == 0);
        role.base.level = kvm_mmu_get_tdp_level(vcpu);
        role.base.direct = true;
-       role.base.gpte_is_8_bytes = true;
+       role.base.has_4_byte_gpte = false;
 
        return role;
 }
@@ -4772,7 +4772,7 @@ kvm_calc_shadow_root_page_role_common(struct kvm_vcpu *vcpu,
        role.base.smep_andnot_wp = role.ext.cr4_smep && !____is_cr0_wp(regs);
        role.base.smap_andnot_wp = role.ext.cr4_smap && !____is_cr0_wp(regs);
-       role.base.gpte_is_8_bytes = ____is_cr0_pg(regs) && ____is_cr4_pae(regs);
+       role.base.has_4_byte_gpte = ____is_cr0_pg(regs) && !____is_cr4_pae(regs);
 
        return role;
 }
@@ -4871,7 +4871,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
        role.base.smm = vcpu->arch.root_mmu.mmu_role.base.smm;
 
        role.base.level = level;
-       role.base.gpte_is_8_bytes = true;
+       role.base.has_4_byte_gpte = false;
        role.base.direct = false;
        role.base.ad_disabled = !accessed_dirty;
        role.base.guest_mode = true;
@@ -5155,7 +5155,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
                 gpa, bytes, sp->role.word);
 
        offset = offset_in_page(gpa);
-       pte_size = sp->role.gpte_is_8_bytes ? 8 : 4;
+       pte_size = sp->role.has_4_byte_gpte ? 4 : 8;
 
        /*
         * Sometimes, the OS only writes the last one bytes to update status
@@ -5179,7 +5179,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
        page_offset = offset_in_page(gpa);
        level = sp->role.level;
        *nspte = 1;
-       if (!sp->role.gpte_is_8_bytes) {
+       if (sp->role.has_4_byte_gpte) {
                page_offset <<= 1;      /* 32->64 */
                /*
                 * A 32-bit pde maps 4MB while the shadow pdes map
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index b8151bbca36a..de5e8e4e1aa7 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -35,7 +35,7 @@
         " %snxe %sad root %u %s%c",
         __entry->mmu_valid_gen,
         __entry->gfn, role.level,
-        role.gpte_is_8_bytes ? 8 : 4,
+        role.has_4_byte_gpte ? 4 : 8,
         role.quadrant,
         role.direct ? " direct" : "",
         access_str[role.access],
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 377a96718a2e..fb602c025d9d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -165,7 +165,7 @@ static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu,
        role = vcpu->arch.mmu->mmu_role.base;
        role.level = level;
        role.direct = true;
-       role.gpte_is_8_bytes = true;
+       role.has_4_byte_gpte = false;
        role.access = ACC_ALL;
        role.ad_disabled = !shadow_accessed_mask;
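To make the inversion concrete, a truth-table sketch (plain C, not KVM code; the direct-map case is where the two names genuinely differ rather than merely invert):

#include <assert.h>
#include <stdbool.h>

/* New rule: set iff guest PTEs exist (CR0.PG=1) and are 4 bytes wide. */
static bool has_4_byte_gpte(bool cr0_pg, bool cr4_pae)
{
        return cr0_pg && !cr4_pae;
}

int main(void)
{
        assert(has_4_byte_gpte(true, false));   /* 32-bit paging: 4-byte gptes */
        assert(!has_4_byte_gpte(true, true));   /* PAE/64-bit: 8-byte gptes */
        /* Nonpaging/direct map: no gptes at all, so the bit stays false,
         * whereas the old gpte_is_8_bytes was forced to 1 for direct roots. */
        assert(!has_4_byte_gpte(false, false));
        return 0;
}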
From patchwork Wed Nov 24 12:20:52 2021
X-Patchwork-Submitter: Lai Jiangshan 
X-Patchwork-Id: 12636941
From: Lai Jiangshan 
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" 
Subject: [PATCH 10/12] KVM: X86: Remove mmu parameter from load_pdptrs()
Date: Wed, 24 Nov 2021 20:20:52 +0800
Message-Id: <20211124122055.64424-11-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan 

load_pdptrs() is always called with vcpu->arch.walk_mmu, so the mmu
parameter is redundant; drop it.

Signed-off-by: Lai Jiangshan 
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/svm/nested.c       |  4 ++--
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/vmx/nested.c       |  4 ++--
 arch/x86/kvm/x86.c              | 12 ++++++------
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 01e50703c878..c106ad7efe23 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1591,7 +1591,7 @@ void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 
-int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
+int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
 
 int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
                        const void *val, int bytes);
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 598843cfe6c4..6bcea96cdb92 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -461,7 +461,7 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
                return -EINVAL;
 
        if (reload_pdptrs && !nested_npt && is_pae_paging(vcpu) &&
-           CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
+           CC(!load_pdptrs(vcpu, cr3)))
                return -EINVAL;
 
        if (!nested_npt)
@@ -1518,7 +1518,7 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
         * the guest CR3 might be restored prior to setting the nested
         * state which can lead to a load of wrong PDPTRs.
         */
-       if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, vcpu->arch.cr3)))
+       if (CC(!load_pdptrs(vcpu, vcpu->arch.cr3)))
                return false;
 
        if (!nested_svm_vmrun_msrpm(svm)) {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d855ba664fc2..e0c18682cbd0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1588,7 +1588,7 @@ static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
        switch (reg) {
        case VCPU_EXREG_PDPTR:
                BUG_ON(!npt_enabled);
-               load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));
+               load_pdptrs(vcpu, kvm_read_cr3(vcpu));
                break;
        default:
                KVM_BUG_ON(1, vcpu->kvm);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 20e126de1c96..d97588bebaaf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1097,7 +1097,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
         * must not be dereferenced.
         */
        if (reload_pdptrs && !nested_ept && is_pae_paging(vcpu) &&
-           CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))) {
+           CC(!load_pdptrs(vcpu, cr3))) {
                *entry_failure_code = ENTRY_FAIL_PDPTE;
                return -EINVAL;
        }
@@ -3142,7 +3142,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
                 * the guest CR3 might be restored prior to setting the nested
                 * state which can lead to a load of wrong PDPTRs.
                 */
-               if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, vcpu->arch.cr3)))
+               if (CC(!load_pdptrs(vcpu, vcpu->arch.cr3)))
                        return false;
        }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 25e278ba4666..f94b0ebe9a4d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -798,8 +798,9 @@ static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu)
 /*
  * Load the pae pdptrs.  Return 1 if they are all valid, 0 otherwise.
  */
-int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
+int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
+       struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
        gfn_t pdpt_gfn = cr3 >> PAGE_SHIFT;
        gpa_t real_gpa;
        int i;
@@ -887,7 +888,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 #endif
        if (!(vcpu->arch.efer & EFER_LME) && (cr0 & X86_CR0_PG) &&
            is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
-           !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
+           !load_pdptrs(vcpu, kvm_read_cr3(vcpu)))
                return 1;
 
        if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
@@ -1063,8 +1064,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
                        return 1;
        } else if (is_paging(vcpu) && (cr4 & X86_CR4_PAE) &&
                   ((cr4 ^ old_cr4) & pdptr_bits)
-                  && !load_pdptrs(vcpu, vcpu->arch.walk_mmu,
-                                  kvm_read_cr3(vcpu)))
+                  && !load_pdptrs(vcpu, kvm_read_cr3(vcpu)))
                return 1;
 
        if ((cr4 & X86_CR4_PCIDE) && !(old_cr4 & X86_CR4_PCIDE)) {
@@ -1153,7 +1153,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
        if (kvm_vcpu_is_illegal_gpa(vcpu, cr3))
                return 1;
 
-       if (is_pae_paging(vcpu) && !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
+       if (is_pae_paging(vcpu) && !load_pdptrs(vcpu, cr3))
                return 1;
 
        if (cr3 != kvm_read_cr3(vcpu))
@@ -10553,7 +10553,7 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
        if (update_pdptrs) {
                idx = srcu_read_lock(&vcpu->kvm->srcu);
                if (is_pae_paging(vcpu)) {
-                       load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));
+                       load_pdptrs(vcpu, kvm_read_cr3(vcpu));
                        *mmu_reset_needed = 1;
                }
                srcu_read_unlock(&vcpu->kvm->srcu, idx);
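For background, a compressed sketch of what load_pdptrs() does with the single mmu it now derives internally (a user-space approximation, not KVM code; the 32-byte PDPT alignment is architectural, but the reserved-bit mask here is a placeholder):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* PAE CR3 points at a 32-byte-aligned table of four 8-byte PDPTEs. */
int main(void)
{
        uint64_t cr3 = 0x12345678;                     /* hypothetical guest CR3 */
        uint64_t pdpt_gfn = cr3 >> PAGE_SHIFT;         /* guest frame holding the PDPT */
        unsigned offset = (unsigned)(cr3 & 0xFE0);     /* bits 5..11 inside that page */
        uint64_t pdpte[4] = { 0x1001, 0x2001, 0, 0 };  /* stand-in for the guest read */
        uint64_t rsvd = 0x1E6;                         /* placeholder reserved-bit mask */

        for (int i = 0; i < 4; i++) {
                /* A present PDPTE with reserved bits set makes the load fail. */
                if ((pdpte[i] & 1) && (pdpte[i] & rsvd)) {
                        printf("invalid PDPTE %d\n", i);
                        return 1;
                }
        }
        printf("PDPT at gfn %#llx + %#x: all valid\n",
               (unsigned long long)pdpt_gfn, offset);
        return 0;
}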
From patchwork Wed Nov 24 12:20:53 2021
X-Patchwork-Submitter: Lai Jiangshan 
X-Patchwork-Id: 12636943
From: Lai Jiangshan 
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" 
Subject: [PATCH 11/12] KVM: X86: Check root_level only in fast_pgd_switch()
Date: Wed, 24 Nov 2021 20:20:53 +0800
Message-Id: <20211124122055.64424-12-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan 

If root_level >= 4, shadow_root_level must be >= 4 too. Checking only
root_level therefore saves a comparison.
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kvm/mmu/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9fb9927264d8..1dc8bfd12ecd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4136,8 +4136,7 @@ static bool fast_pgd_switch(struct kvm_vcpu *vcpu, gpa_t new_pgd,
         * having to deal with PDPTEs. We may add support for 32-bit hosts/VMs
         * later if necessary.
         */
-       if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
-           mmu->root_level >= PT64_ROOT_4LEVEL)
+       if (mmu->root_level >= PT64_ROOT_4LEVEL)
                return cached_root_available(vcpu, new_pgd, new_role);
 
        return false;
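The reasoning can be stated as a tiny invariant check (illustrative values, not KVM code):

#include <assert.h>

enum { PT32E_ROOT_LEVEL = 3, PT64_ROOT_4LEVEL = 4, PT64_ROOT_5LEVEL = 5 };

int main(void)
{
        /* Hypothetical configuration: a 4-level guest under a 5-level host.
         * The shadow root never sits below the guest root. */
        int root_level = PT64_ROOT_4LEVEL;
        int shadow_root_level = PT64_ROOT_5LEVEL;

        assert(shadow_root_level >= root_level);
        /* Hence root_level >= PT64_ROOT_4LEVEL already implies
         * shadow_root_level >= PT64_ROOT_4LEVEL, and the dropped half
         * of the old condition was redundant. */
        if (root_level >= PT64_ROOT_4LEVEL)
                assert(shadow_root_level >= PT64_ROOT_4LEVEL);
        return 0;
}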
From patchwork Wed Nov 24 12:20:54 2021
X-Patchwork-Submitter: Lai Jiangshan 
X-Patchwork-Id: 12636945
From: Lai Jiangshan 
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Lai Jiangshan , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" 
Subject: [PATCH 12/12] KVM: X86: Walk shadow page starting with shadow_root_level
Date: Wed, 24 Nov 2021 20:20:54 +0800
Message-Id: <20211124122055.64424-13-jiangshanlai@gmail.com>
In-Reply-To: <20211124122055.64424-1-jiangshanlai@gmail.com>
References: <20211124122055.64424-1-jiangshanlai@gmail.com>

From: Lai Jiangshan 

Walking from the root page of the shadow page table should start with
the level of the shadow page table itself: shadow_root_level. Also fix
a small defect in audit_mappings(): the current walking level is more
useful to print than the guest's root_level.

Signed-off-by: Lai Jiangshan 
---
 arch/x86/kvm/mmu/mmu_audit.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_audit.c b/arch/x86/kvm/mmu/mmu_audit.c
index 9e7dcf999f08..6bbbf85b3e46 100644
--- a/arch/x86/kvm/mmu/mmu_audit.c
+++ b/arch/x86/kvm/mmu/mmu_audit.c
@@ -63,7 +63,7 @@ static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn)
                hpa_t root = vcpu->arch.mmu->root_hpa;
 
                sp = to_shadow_page(root);
-               __mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->root_level);
+               __mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->shadow_root_level);
                return;
        }
@@ -119,8 +119,7 @@ static void audit_mappings(struct kvm_vcpu *vcpu, u64 *sptep, int level)
        hpa = pfn << PAGE_SHIFT;
 
        if ((*sptep & PT64_BASE_ADDR_MASK) != hpa)
                audit_printk(vcpu->kvm, "levels %d pfn %llx hpa %llx "
-                            "ent %llxn", vcpu->arch.mmu->root_level, pfn,
-                            hpa, *sptep);
+                            "ent %llxn", level, pfn, hpa, *sptep);
 }
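To see why the starting level matters, a minimal sketch of a recursive SPTE walk (hypothetical structure, not KVM's actual iterator): with shadow_root_level=5 and root_level=4, starting the recursion at 4 would treat the 5-level shadow root as a 4th-level table and mis-compute every index below it.

#include <stdio.h>

#define SPTES_PER_PAGE 512

struct shadow_page {
        unsigned long long spte[SPTES_PER_PAGE];
};

static void walk(const struct shadow_page *sp, int level)
{
        if (!sp || level < 1)
                return;
        for (int i = 0; i < SPTES_PER_PAGE; i++) {
                unsigned long long spte = sp->spte[i];
                /* A real walker recurses into present non-leaf SPTEs with
                 * level - 1; this sketch just consumes the entry. */
                (void)spte;
        }
        printf("walked one table at level %d\n", level);
}

int main(void)
{
        struct shadow_page root = { {0} };
        int shadow_root_level = 5;      /* assumed 5-level host paging */

        /* The root table belongs to the *shadow* hierarchy, so the walk
         * must begin at shadow_root_level, not the guest's root_level. */
        walk(&root, shadow_root_level);
        return 0;
}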