From patchwork Thu Feb 16 15:41:07 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143394
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH V3 01/14] KVM: x86/mmu: Use 64-bit address to invalidate to fix a subtle bug Date: Thu, 16 Feb 2023 23:41:07 +0800 Message-Id: <20230216154115.710033-2-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com> References: <20230216154115.710033-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan FNAME(invlpg)() and kvm_mmu_invalidate_gva() take a gva_t, i.e. unsigned long, as the type of the address to invalidate. On 32-bit kernels, the upper 32 bits of the GPA will get dropped when an L2 GPA address is to invalidate in the shadowed TDP MMU. Convert it to u64 to fix the problem. Reported-by: Sean Christopherson Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 6 +++--- arch/x86/kvm/mmu/mmu.c | 16 ++++++++-------- arch/x86/kvm/mmu/paging_tmpl.h | 7 ++++--- arch/x86/kvm/x86.c | 4 ++-- 4 files changed, 17 insertions(+), 16 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4d2bc08794e4..5466f4152c67 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -443,7 +443,7 @@ struct kvm_mmu { struct x86_exception *exception); int (*sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp); - void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa); + void (*invlpg)(struct kvm_vcpu *vcpu, u64 addr, hpa_t root_hpa); struct kvm_mmu_root_info root; union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; @@ -2025,8 +2025,8 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu); int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, void *insn, int insn_len); void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva); -void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - gva_t gva, hpa_t root_hpa); +void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, + u64 addr, hpa_t root_hpa); void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid); void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c91ee2927dd7..91f8e1d1d4cc 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5706,25 +5706,25 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err } EXPORT_SYMBOL_GPL(kvm_mmu_page_fault); -void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - gva_t gva, hpa_t root_hpa) +void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, + u64 addr, hpa_t root_hpa) { int i; /* It's actually a GPA for vcpu->arch.guest_mmu. */ if (mmu != &vcpu->arch.guest_mmu) { /* INVLPG on a non-canonical address is a NOP according to the SDM. 
From patchwork Thu Feb 16 15:41:08 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143395
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 02/14] kvm: x86/mmu: Move the check in FNAME(sync_page) as kvm_sync_page_check()
Date: Thu, 16 Feb 2023 23:41:08 +0800
Message-Id: <20230216154115.710033-3-jiangshanlai@gmail.com>
In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Prepare to check the mmu->sync_page pointer before calling it.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c         | 43 +++++++++++++++++++++++++++++++++-
 arch/x86/kvm/mmu/paging_tmpl.h | 27 ---------------------
 2 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 91f8e1d1d4cc..ee2837ea18d4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1914,10 +1914,51 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else

+static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+{
+	union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role;
+
+	/*
+	 * Ignore various flags when verifying that it's safe to sync a shadow
+	 * page using the current MMU context.
+	 *
+	 * - level: not part of the overall MMU role and will never match as the MMU's
+	 *          level tracks the root level
+	 * - access: updated based on the new guest PTE
+	 * - quadrant: not part of the overall MMU role (similar to level)
+	 */
+	const union kvm_mmu_page_role sync_role_ign = {
+		.level = 0xf,
+		.access = 0x7,
+		.quadrant = 0x3,
+		.passthrough = 0x1,
+	};
+
+	/*
+	 * Direct pages can never be unsync, and KVM should never attempt to
+	 * sync a shadow page for a different MMU context, e.g. if the role
+	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
+	 * reserved bits checks will be wrong, etc...
+	 */
+	if (WARN_ON_ONCE(sp->role.direct ||
+			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
+		return false;
+
+	return true;
+}
+
+static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+{
+	if (!kvm_sync_page_check(vcpu, sp))
+		return -1;
+
+	return vcpu->arch.mmu->sync_page(vcpu, sp);
+}
+
 static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
-	int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
+	int ret = __kvm_sync_page(vcpu, sp);

 	if (ret < 0)
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c7b1de064be5..e0aae0a7f646 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -984,38 +984,11 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
-	union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role;
 	int i;
 	bool host_writable;
 	gpa_t first_pte_gpa;
 	bool flush = false;

-	/*
-	 * Ignore various flags when verifying that it's safe to sync a shadow
-	 * page using the current MMU context.
-	 *
-	 * - level: not part of the overall MMU role and will never match as the MMU's
-	 *          level tracks the root level
-	 * - access: updated based on the new guest PTE
-	 * - quadrant: not part of the overall MMU role (similar to level)
-	 */
-	const union kvm_mmu_page_role sync_role_ign = {
-		.level = 0xf,
-		.access = 0x7,
-		.quadrant = 0x3,
-		.passthrough = 0x1,
-	};
-
-	/*
-	 * Direct pages can never be unsync, and KVM should never attempt to
-	 * sync a shadow page for a different MMU context, e.g. if the role
-	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
-	 * reserved bits checks will be wrong, etc...
-	 */
-	if (WARN_ON_ONCE(sp->role.direct ||
-			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
-		return -1;
-
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);

 	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
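[The WARN_ON_ONCE() above is plain bitmask arithmetic over the packed
role word: XOR the two roles, mask off the bits allowed to differ, and
any remaining set bit means the page belongs to a different MMU
context. A standalone sketch of the same pattern; the bitfield layout
here is simplified and invented for illustration:]

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for union kvm_mmu_page_role (illustrative layout). */
    union page_role {
            uint32_t word;
            struct {
                    uint32_t level    : 4;
                    uint32_t access   : 3;
                    uint32_t quadrant : 2;
                    uint32_t direct   : 1;
            };
    };

    int main(void)
    {
            union page_role sp_role   = { .level = 2, .access = 7 };
            union page_role root_role = { .level = 4, .access = 3 };

            /* Bits to ignore when comparing: level, access, quadrant. */
            const union page_role ign = { .level = 0xf, .access = 0x7, .quadrant = 0x3 };

            /* Non-zero iff the roles differ in any bit outside the ignored set. */
            uint32_t mismatch = (sp_role.word ^ root_role.word) & ~ign.word;

            printf("mismatch: %#x\n", mismatch); /* 0 here: safe to sync */
            return 0;
    }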
From patchwork Thu Feb 16 15:41:09 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143396
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH V3 03/14] kvm: x86/mmu: Check mmu->sync_page pointer in kvm_sync_page_check() Date: Thu, 16 Feb 2023 23:41:09 +0800 Message-Id: <20230216154115.710033-4-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com> References: <20230216154115.710033-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan Check the pointer before calling it to catch any possible mistake. Signed-off-by: Lai Jiangshan --- arch/x86/kvm/mmu/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ee2837ea18d4..69ab0d1bb0ec 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1940,7 +1940,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the * reserved bits checks will be wrong, etc... */ - if (WARN_ON_ONCE(sp->role.direct || + if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_page || (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) return false; From patchwork Thu Feb 16 15:41:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lai Jiangshan X-Patchwork-Id: 13143397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6BD4C61DA4 for ; Thu, 16 Feb 2023 15:40:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230280AbjBPPkn (ORCPT ); Thu, 16 Feb 2023 10:40:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41898 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230153AbjBPPkf (ORCPT ); Thu, 16 Feb 2023 10:40:35 -0500 Received: from mail-pg1-x534.google.com (mail-pg1-x534.google.com [IPv6:2607:f8b0:4864:20::534]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97CD8564BB; Thu, 16 Feb 2023 07:40:30 -0800 (PST) Received: by mail-pg1-x534.google.com with SMTP id e1so1519894pgg.9; Thu, 16 Feb 2023 07:40:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+P2V/rDJHh3791vsFkos1nlRJ7X5Lj2yIGz2MSJGxnc=; b=lP/cnq8Y0MQbnwJSe62IO03UHEUfyoey/rrq8diUr3a1B80t2Te/gLBLDfwqeBZcwO fVTeoTmAboWBd1jDUrzqbU9g9mlLD4/ZwY1xn0ubM4oAp15rg1MKZL4F+Ojt2Va4kJHX QBQrFyEb5pSSCnI6EolRND/PdXmlP4bNGCd0Eqtb0sg1yuoTHY0hTbS4edZWJAJS7RGE CA7GBlv+ytVTYkQXx/9V/OEhFQ9yeVMGakh7Ts+8CUlMkQ8V535dC761h5vWV+BlHPku lJeq1fX3WAVK6RRKlpsFlPv7ByO5lBhCgFd7XOlPwN1Ai4Br1Oc/hinkP/sXdNCxI3z9 0mnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+P2V/rDJHh3791vsFkos1nlRJ7X5Lj2yIGz2MSJGxnc=; b=TmqlMSHHya7EZs6dqAAFdbbeG26BWzE+hGeMfhwHwsE1POzerMLK1z1r2tzKPkY7/r F1cqliKbLct+3BIV3taaMFKmWfjEJIwm2JDFRH4yOsKqJMX9579ZS/4ZHOV/AUnYw6wo ReeYI7eSChKfsbffYVtwSxmpMD1hvfCnu6sAVe9q7/X5INcS5hOCXo3/hodNrgfJFpDG 
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 04/14] kvm: x86/mmu: Set mmu->sync_page as NULL for direct paging
Date: Thu, 16 Feb 2023 23:41:10 +0800
Message-Id: <20230216154115.710033-5-jiangshanlai@gmail.com>
In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

mmu->sync_page is never called for direct paging, and both
mmu->sync_page and mmu->invlpg only make sense in shadow paging.
Setting mmu->sync_page to NULL for direct paging makes it consistent
with mmu->invlpg, which is already set to NULL in that case.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 69ab0d1bb0ec..f50f82bb3662 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1789,12 +1789,6 @@ static void mark_unsync(u64 *spte)
 	kvm_mmu_mark_parents_unsync(sp);
 }

-static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
-			       struct kvm_mmu_page *sp)
-{
-	return -1;
-}
-
 #define KVM_PAGE_ARRAY_NR 16

 struct kvm_mmu_pages {
@@ -4510,7 +4504,7 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
-	context->sync_page = nonpaging_sync_page;
+	context->sync_page = NULL;
 	context->invlpg = NULL;
 }
@@ -5198,7 +5192,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->cpu_role.as_u64 = cpu_role.as_u64;
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
-	context->sync_page = nonpaging_sync_page;
+	context->sync_page = NULL;
 	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;

From patchwork Thu Feb 16 15:41:11 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143398
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229934AbjBPPkr (ORCPT ); Thu, 16 Feb 2023 10:40:47 -0500 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F165155E4F; Thu, 16 Feb 2023 07:40:34 -0800 (PST) Received: by mail-pl1-x62f.google.com with SMTP id be8so2405911plb.7; Thu, 16 Feb 2023 07:40:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=tswCNRF4PJXn2AcdPflQ9tzBOQNHiMw8By26m9aOmgc=; b=GZ7mCdSJ36N/eYmbu2fpeXXACqfefgjdCsoNWOHbF88B3CS2dGak856gIY0WlhueP/ H5OhaZyx0VYKNrlaza3/5YxVgnSGZLYas37e/Lr5aRCNBRx9mxxt6qV2W1Bca245RLcb jBZ1efPApB3py1T3BzAkOV3rR7B2MrCdm7WUUpfN4CJk8GU4ChtNY/3eMcgAmRWHPuXe JSYUtGhrkd5ezMqPG40EkNdEIaoYoYdW6Je8Raoj68bDkJnU6RQSrtOwlwYOy6vLmsxT WuypL0ddKch+fv62V574Uk0UGvAnXkMjWl8qL3rQbqCYlhNwMKsA1m+07xcvK9v41r18 nOyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tswCNRF4PJXn2AcdPflQ9tzBOQNHiMw8By26m9aOmgc=; b=tfKqvlPnDDYol6HuHfZbnWyWo3FtAnslbYJkSa151q6x8dJ3rVD/ulf43hzs4SZje5 GTx3TWq+eKyWsp6BpvE9nA7ydPTp9Y7hbFNdrx9LbieCvUqAbDx23oqglogEJ80KxzUY LZkbDiiUw6+UESEHTJFkGLFB31XfiGVJfP6J1FNHDfuf/FpfyJ6wasIq9S34pJcJGx/8 OeB5OQPKX3LDgS4Povf3m2tLhLHzo1RdgliKTXqtPRQ8l3UCrRXXNPmhIZJ6rCqtJNNb OUbA8XOpAUhtCgNClEMQAL7gz0T1kTcnRauffXFOGL4PtJ+o8MQ0AjZ6OySsWmMPnt3M fHMw== X-Gm-Message-State: AO0yUKWbUkAfRIhxB+vnaSnsMG3ahUE0Vbe9rzq2nqgOC2rbjHB2HI4Y rpItC60Qd2bU1P/EQvEnd3SEdMDhE4g= X-Google-Smtp-Source: AK7set+PutRRY6v8CCKBJMUWwCwjVrR7lRXM7XZ5QoG1D3+at7OGK9t8fOWGcCmXBda6O5EJugqJ7w== X-Received: by 2002:a17:90b:180c:b0:234:b23:eadb with SMTP id lw12-20020a17090b180c00b002340b23eadbmr7368122pjb.41.1676562034174; Thu, 16 Feb 2023 07:40:34 -0800 (PST) Received: from localhost ([47.254.32.37]) by smtp.gmail.com with ESMTPSA id o98-20020a17090a0a6b00b0021904307a53sm1394429pjo.19.2023.02.16.07.40.33 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 16 Feb 2023 07:40:33 -0800 (PST) From: Lai Jiangshan To: linux-kernel@vger.kernel.org Cc: Paolo Bonzini , Sean Christopherson , Lai Jiangshan , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH V3 05/14] kvm: x86/mmu: Move the code out of FNAME(sync_page)'s loop body into mmu.c Date: Thu, 16 Feb 2023 23:41:11 +0800 Message-Id: <20230216154115.710033-6-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com> References: <20230216154115.710033-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan Rename mmu->sync_page to mmu->sync_spte and move the code out of FNAME(sync_page)'s loop body into mmu.c. No functionalities change intended. 
Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |   4 +-
 arch/x86/kvm/mmu/mmu.c          |  34 ++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h  | 114 +++++++++++++-------------------
 3 files changed, 76 insertions(+), 76 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5466f4152c67..b71b52fdb5ee 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -441,8 +441,8 @@ struct kvm_mmu {
 	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			    gpa_t gva_or_gpa, u64 access,
 			    struct x86_exception *exception);
-	int (*sync_page)(struct kvm_vcpu *vcpu,
-			 struct kvm_mmu_page *sp);
+	int (*sync_spte)(struct kvm_vcpu *vcpu,
+			 struct kvm_mmu_page *sp, int i);
 	void (*invlpg)(struct kvm_vcpu *vcpu, u64 addr, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
 	union kvm_cpu_role cpu_role;

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f50f82bb3662..a8231b73ad4d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1934,7 +1934,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
 	 * reserved bits checks will be wrong, etc...
 	 */
-	if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_page ||
+	if (WARN_ON_ONCE(sp->role.direct || !vcpu->arch.mmu->sync_spte ||
 			 (sp->role.word ^ root_role.word) & ~sync_role_ign.word))
 		return false;
@@ -1943,10 +1943,30 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
+	int flush = 0;
+	int i;
+
 	if (!kvm_sync_page_check(vcpu, sp))
 		return -1;

-	return vcpu->arch.mmu->sync_page(vcpu, sp);
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
+		int ret = vcpu->arch.mmu->sync_spte(vcpu, sp, i);
+
+		if (ret < -1)
+			return -1;
+		flush |= ret;
+	}
+
+	/*
+	 * Note, any flush is purely for KVM's correctness, e.g. when dropping
+	 * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier
+	 * unmap or dirty logging event doesn't fail to flush. The guest is
+	 * responsible for flushing the TLB to ensure any changes in protection
+	 * bits are recognized, i.e. until the guest flushes or page faults on
+	 * a relevant address, KVM is architecturally allowed to let vCPUs use
+	 * cached translations with the old protection bits.
+	 */
+	return flush;
 }

 static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
@@ -4504,7 +4524,7 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
-	context->sync_page = NULL;
+	context->sync_spte = NULL;
 	context->invlpg = NULL;
 }
@@ -5095,7 +5115,7 @@ static void paging64_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
-	context->sync_page = paging64_sync_page;
+	context->sync_spte = paging64_sync_spte;
 	context->invlpg = paging64_invlpg;
 }
@@ -5103,7 +5123,7 @@ static void paging32_init_context(struct kvm_mmu *context)
 {
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
-	context->sync_page = paging32_sync_page;
+	context->sync_spte = paging32_sync_spte;
 	context->invlpg = paging32_invlpg;
 }
@@ -5192,7 +5212,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->cpu_role.as_u64 = cpu_role.as_u64;
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
-	context->sync_page = NULL;
+	context->sync_spte = NULL;
 	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
@@ -5324,7 +5344,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 		context->page_fault = ept_page_fault;
 		context->gva_to_gpa = ept_gva_to_gpa;
-		context->sync_page = ept_sync_page;
+		context->sync_spte = ept_sync_spte;
 		context->invlpg = ept_invlpg;

 		update_permission_bitmask(context, true);

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e0aae0a7f646..0ea938276ba8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -978,87 +978,67 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
  * can't change unless all sptes pointing to it are nuked first.
  *
  * Returns
- * < 0: the sp should be zapped
- * 0: the sp is synced and no tlb flushing is required
- * > 0: the sp is synced and tlb flushing is required
+ * < 0: failed to sync spte
+ * 0: the spte is synced and no tlb flushing is required
+ * > 0: the spte is synced and tlb flushing is required
  */
-static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 {
-	int i;
 	bool host_writable;
 	gpa_t first_pte_gpa;
-	bool flush = false;
-
-	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-
-	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
-		u64 *sptep, spte;
-		struct kvm_memory_slot *slot;
-		unsigned pte_access;
-		pt_element_t gpte;
-		gpa_t pte_gpa;
-		gfn_t gfn;
-
-		if (!sp->spt[i])
-			continue;
+	u64 *sptep, spte;
+	struct kvm_memory_slot *slot;
+	unsigned pte_access;
+	pt_element_t gpte;
+	gpa_t pte_gpa;
+	gfn_t gfn;

-		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
+	if (!sp->spt[i])
+		return 0;

-		if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
-					       sizeof(pt_element_t)))
-			return -1;
+	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
+	pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);

-		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			flush = true;
-			continue;
-		}
+	if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
+				       sizeof(pt_element_t)))
+		return -1;

-		gfn = gpte_to_gfn(gpte);
-		pte_access = sp->role.access;
-		pte_access &= FNAME(gpte_access)(gpte);
-		FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
+	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte))
+		return 1;

-		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
-			continue;
+	gfn = gpte_to_gfn(gpte);
+	pte_access = sp->role.access;
+	pte_access &= FNAME(gpte_access)(gpte);
+	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);

-		/*
-		 * Drop the SPTE if the new protections would result in a RWX=0
-		 * SPTE or if the gfn is changing. The RWX=0 case only affects
-		 * EPT with execute-only support, i.e. EPT without an effective
-		 * "present" bit, as all other paging modes will create a
-		 * read-only SPTE if pte_access is zero.
-		 */
-		if ((!pte_access && !shadow_present_mask) ||
-		    gfn != kvm_mmu_page_get_gfn(sp, i)) {
-			drop_spte(vcpu->kvm, &sp->spt[i]);
-			flush = true;
-			continue;
-		}
+	if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
+		return 0;

-		/* Update the shadowed access bits in case they changed. */
-		kvm_mmu_page_set_access(sp, i, pte_access);
+	/*
+	 * Drop the SPTE if the new protections would result in a RWX=0
+	 * SPTE or if the gfn is changing. The RWX=0 case only affects
+	 * EPT with execute-only support, i.e. EPT without an effective
+	 * "present" bit, as all other paging modes will create a
+	 * read-only SPTE if pte_access is zero.
+	 */
+	if ((!pte_access && !shadow_present_mask) ||
+	    gfn != kvm_mmu_page_get_gfn(sp, i)) {
+		drop_spte(vcpu->kvm, &sp->spt[i]);
+		return 1;
+	}

-		sptep = &sp->spt[i];
-		spte = *sptep;
-		host_writable = spte & shadow_host_writable_mask;
-		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-		make_spte(vcpu, sp, slot, pte_access, gfn,
-			  spte_to_pfn(spte), spte, true, false,
-			  host_writable, &spte);
+	/* Update the shadowed access bits in case they changed. */
+	kvm_mmu_page_set_access(sp, i, pte_access);

-		flush |= mmu_spte_update(sptep, spte);
-	}
+	sptep = &sp->spt[i];
+	spte = *sptep;
+	host_writable = spte & shadow_host_writable_mask;
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	make_spte(vcpu, sp, slot, pte_access, gfn,
+		  spte_to_pfn(spte), spte, true, false,
+		  host_writable, &spte);

-	/*
-	 * Note, any flush is purely for KVM's correctness, e.g. when dropping
-	 * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier
-	 * unmap or dirty logging event doesn't fail to flush. The guest is
-	 * responsible for flushing the TLB to ensure any changes in protection
-	 * bits are recognized, i.e. until the guest flushes or page faults on
-	 * a relevant address, KVM is architecturally allowed to let vCPUs use
-	 * cached translations with the old protection bits.
-	 */
-	return flush;
+	return mmu_spte_update(sptep, spte);
 }

 #undef pt_element_t
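[With this split, mmu.c owns a small driver loop and the per-entry
callback reports one of three outcomes. A toy model of the calling
convention, illustrative only; sync_spte() below is an invented
stand-in for mmu->sync_spte():]

    #include <stdio.h>

    #define ENTRIES 8

    /* Toy stand-in: <0 means failure, 0 synced, >0 synced and flush needed. */
    static int sync_spte(int i)
    {
            if (i == 3)
                    return 1;       /* this entry changed, TLB flush needed */
            return 0;               /* in sync, nothing to do */
    }

    int main(void)
    {
            int flush = 0;

            for (int i = 0; i < ENTRIES; i++) {
                    int ret = sync_spte(i);

                    if (ret < 0) {
                            puts("sync failed, zap the page");
                            return -1;
                    }
                    flush |= ret;   /* accumulate "flush needed" across entries */
            }
            printf("flush needed: %d\n", flush);
            return 0;
    }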
From patchwork Thu Feb 16 15:41:12 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143399
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 06/14] kvm: x86/mmu: Reduce the update to the spte in FNAME(sync_spte)
Date: Thu, 16 Feb 2023 23:41:12 +0800
Message-Id: <20230216154115.710033-7-jiangshanlai@gmail.com>
In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Sometimes when the guest updates its pagetable, it only adds new gptes
without changing any existing ones, so there is no point in updating
the sptes for those existing gptes. Worse, when the sptes for unchanged
gptes are updated, the AD bits are also cleared, since make_spte() is
called with prefetch=true, which can result in unneeded TLB flushing.

Just do nothing if the gpte's permissions are unchanged.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0ea938276ba8..7db167876cd7 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1026,6 +1026,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 		drop_spte(vcpu->kvm, &sp->spt[i]);
 		return 1;
 	}
+	/*
+	 * Do nothing if the permissions are unchanged.
+	 */
+	if (kvm_mmu_page_get_access(sp, i) == pte_access)
+		return 0;

 	/* Update the shadowed access bits in case they changed. */
 	kvm_mmu_page_set_access(sp, i, pte_access);
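[The early return amounts to an idempotence guard: if the freshly
computed permissions equal the cached ones, skip the spte rewrite and
the flush it would imply. A toy sketch of that guard, illustrative
only, with invented names:]

    #include <stdio.h>

    /* Toy model: cached access bits per entry vs. freshly computed ones. */
    static unsigned cached_access[4] = { 7, 5, 7, 3 };

    static int resync_entry(int i, unsigned new_access)
    {
            /* Do nothing if the permissions are unchanged: no write, no flush. */
            if (cached_access[i] == new_access)
                    return 0;

            cached_access[i] = new_access;  /* would also rebuild the spte ... */
            return 1;                       /* ... and require a TLB flush */
    }

    int main(void)
    {
            printf("entry 0 unchanged -> flush=%d\n", resync_entry(0, 7));
            printf("entry 1 changed   -> flush=%d\n", resync_entry(1, 7));
            return 0;
    }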
From patchwork Thu Feb 16 15:41:13 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143400
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH V3 07/14] KVM: x86/mmu: Sanity check input to kvm_mmu_free_roots() Date: Thu, 16 Feb 2023 23:41:13 +0800 Message-Id: <20230216154115.710033-8-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com> References: <20230216154115.710033-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Tweak KVM_MMU_ROOTS_ALL to precisely cover all current+previous root flags, and add a sanity in kvm_mmu_free_roots() to verify that the set of roots to free doesn't stray outside KVM_MMU_ROOTS_ALL. Signed-off-by: Sean Christopherson Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 8 ++++---- arch/x86/kvm/mmu/mmu.c | 2 ++ 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index b71b52fdb5ee..5bd91c49c8b3 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -422,6 +422,10 @@ struct kvm_mmu_root_info { #define KVM_MMU_NUM_PREV_ROOTS 3 +#define KVM_MMU_ROOT_CURRENT BIT(0) +#define KVM_MMU_ROOT_PREVIOUS(i) BIT(1+i) +#define KVM_MMU_ROOTS_ALL (BIT(1 + KVM_MMU_NUM_PREV_ROOTS) - 1) + #define KVM_HAVE_MMU_RWLOCK struct kvm_mmu_page; @@ -1978,10 +1982,6 @@ static inline int __kvm_irq_line_state(unsigned long *irq_state, return !!(*irq_state); } -#define KVM_MMU_ROOT_CURRENT BIT(0) -#define KVM_MMU_ROOT_PREVIOUS(i) BIT(1+i) -#define KVM_MMU_ROOTS_ALL (~0UL) - int kvm_pic_set_irq(struct kvm_pic *pic, int irq, int irq_source_id, int level); void kvm_pic_clear_all(struct kvm_pic *pic, int irq_source_id); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a8231b73ad4d..a4793cb8d64a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3568,6 +3568,8 @@ void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu, LIST_HEAD(invalid_list); bool free_active_root; + WARN_ON_ONCE(roots_to_free & ~KVM_MMU_ROOTS_ALL); + BUILD_BUG_ON(KVM_MMU_NUM_PREV_ROOTS >= BITS_PER_LONG); /* Before acquiring the MMU lock, see if we need to do any real work. 
From patchwork Thu Feb 16 15:41:14 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143401
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH V3 08/14] kvm: x86/mmu: Use KVM_MMU_ROOT_XXX for kvm_mmu_invalidate_addr() Date: Thu, 16 Feb 2023 23:41:14 +0800 Message-Id: <20230216154115.710033-9-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com> References: <20230216154115.710033-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan The @root_hpa for kvm_mmu_invalidate_addr() is called with @mmu->root.hpa or INVALID_PAGE where @mmu->root.hpa is to invalidate gva for the current root (the same meaning as KVM_MMU_ROOT_CURRENT) and INVALID_PAGE is to invalidate gva for all roots (the same meaning as KVM_MMU_ROOTS_ALL). Change the argument type of kvm_mmu_invalidate_addr() and use KVM_MMU_ROOT_XXX instead so that we can reuse the function for kvm_mmu_invpcid_gva() and nested_ept_invalidate_addr() for invalidating gva for different set of roots. No fuctionalities changed. Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++---------------- arch/x86/kvm/x86.c | 2 +- 3 files changed, 22 insertions(+), 21 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 5bd91c49c8b3..cce4243d6688 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2026,7 +2026,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, void *insn, int insn_len); void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva); void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - u64 addr, hpa_t root_hpa); + u64 addr, unsigned long roots); void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid); void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a4793cb8d64a..9f261e444a32 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5764,10 +5764,12 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err EXPORT_SYMBOL_GPL(kvm_mmu_page_fault); void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - u64 addr, hpa_t root_hpa) + u64 addr, unsigned long roots) { int i; + WARN_ON_ONCE(roots & ~KVM_MMU_ROOTS_ALL); + /* It's actually a GPA for vcpu->arch.guest_mmu. */ if (mmu != &vcpu->arch.guest_mmu) { /* INVLPG on a non-canonical address is a NOP according to the SDM. */ @@ -5780,31 +5782,30 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, if (!mmu->invlpg) return; - if (root_hpa == INVALID_PAGE) { + if (roots & KVM_MMU_ROOT_CURRENT) mmu->invlpg(vcpu, addr, mmu->root.hpa); - /* - * INVLPG is required to invalidate any global mappings for the VA, - * irrespective of PCID. Since it would take us roughly similar amount - * of work to determine whether any of the prev_root mappings of the VA - * is marked global, or to just sync it blindly, so we might as well - * just always sync it. - * - * Mappings not reachable via the current cr3 or the prev_roots will be - * synced when switching to that cr3, so nothing needs to be done here - * for them. 
From patchwork Thu Feb 16 15:41:15 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143402
From patchwork Thu Feb 16 15:41:15 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13143402
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 09/14] kvm: x86/mmu: Use kvm_mmu_invalidate_addr() in kvm_mmu_invpcid_gva()
Date: Thu, 16 Feb 2023 23:41:15 +0800
Message-Id: <20230216154115.710033-10-jiangshanlai@gmail.com>
In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Use kvm_mmu_invalidate_addr() instead of open-coded calls to
mmu->invlpg().

No functional change intended.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9f261e444a32..c48f98fbd6ae 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5814,27 +5814,20 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
-	bool tlb_flush = false;
+	unsigned long roots = 0;
 	uint i;
 
-	if (pcid == kvm_get_active_pcid(vcpu)) {
-		if (mmu->invlpg)
-			mmu->invlpg(vcpu, gva, mmu->root.hpa);
-		tlb_flush = true;
-	}
+	if (pcid == kvm_get_active_pcid(vcpu))
+		roots |= KVM_MMU_ROOT_CURRENT;
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
-		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
-			if (mmu->invlpg)
-				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
-			tlb_flush = true;
-		}
+		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd))
+			roots |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
 
-	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
-
+	if (roots)
+		kvm_mmu_invalidate_addr(vcpu, mmu, gva, roots);
 	++vcpu->stat.invlpg;
 
 	/*
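[Editorial note: the shape of the conversion is easier to see in
isolation. Instead of calling mmu->invlpg() at each matching root and
tracking a separate tlb_flush flag, the function now accumulates a root
bitmask and performs a single invalidation. A self-contained sketch,
where the per-root PCID cache is a made-up stand-in for mmu->root and
mmu->prev_roots:]

#include <stdio.h>

#define NUM_PREV_ROOTS    3
#define ROOT_CURRENT      (1UL << 0)
#define ROOT_PREVIOUS(i)  (1UL << (1 + (i)))

/* Hypothetical per-root PCID cache; -1 marks an invalid root. */
static int current_pcid = 7;
static int prev_pcid[NUM_PREV_ROOTS] = { 3, -1, 7 };

/* Collect every root whose cached PCID matches, then invalidate once. */
static unsigned long roots_for_pcid(int pcid)
{
        unsigned long roots = 0;
        int i;

        if (pcid == current_pcid)
                roots |= ROOT_CURRENT;
        for (i = 0; i < NUM_PREV_ROOTS; i++)
                if (prev_pcid[i] >= 0 && pcid == prev_pcid[i])
                        roots |= ROOT_PREVIOUS(i);
        return roots;
}

int main(void)
{
        /* PCID 7 is cached by the current root and prev_roots[2]. */
        printf("roots mask for pcid 7: %#lx\n", roots_for_pcid(7));
        return 0;
}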
From patchwork Thu Feb 16 23:53:17 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13144155
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 10/14] kvm: x86/mmu: Use kvm_mmu_invalidate_addr() in nested_ept_invalidate_addr()
Date: Fri, 17 Feb 2023 07:53:17 +0800
Message-Id: <20230216235321.735214-1-jiangshanlai@gmail.com>
In-Reply-To: <20230216154115.710033-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Use kvm_mmu_invalidate_addr() instead of an open-coded call to
mmu->invlpg().

No functional change intended.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c    | 1 +
 arch/x86/kvm/vmx/nested.c | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c48f98fbd6ae..9b5e3afbcdb4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5791,6 +5791,7 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			mmu->invlpg(vcpu, addr, mmu->prev_roots[i].hpa);
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_addr);
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 557b9c468734..cb502bbaee87 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -358,6 +358,7 @@ static bool nested_ept_root_matches(hpa_t root_hpa, u64 root_eptp, u64 eptp)
 static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 				       gpa_t addr)
 {
+	unsigned long roots = 0;
 	uint i;
 	struct kvm_mmu_root_info *cached_root;
 
@@ -368,8 +369,10 @@ static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 
 		if (nested_ept_root_matches(cached_root->hpa, cached_root->pgd,
 					    eptp))
-			vcpu->arch.mmu->invlpg(vcpu, addr, cached_root->hpa);
+			roots |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
+	if (roots)
+		kvm_mmu_invalidate_addr(vcpu, vcpu->arch.mmu, addr, roots);
 }
 
 static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
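[Editorial note: this is where the bitmask argument introduced in patch
08 pays off, since the same accumulate-then-invalidate pattern works
with a different matching predicate. A standalone sketch, with
eptp_matches() as a hypothetical stand-in for nested_ept_root_matches():]

#include <stdio.h>

#define NUM_PREV_ROOTS    3
#define ROOT_PREVIOUS(i)  (1UL << (1 + (i)))

/* Hypothetical cached EPTPs for the previous roots; 0 = invalid. */
static unsigned long long prev_eptp[NUM_PREV_ROOTS] = { 0x5000, 0, 0x9000 };

/* Stand-in for nested_ept_root_matches(). */
static int eptp_matches(unsigned long long cached, unsigned long long eptp)
{
        return cached && cached == eptp;
}

int main(void)
{
        unsigned long long eptp = 0x9000;
        unsigned long roots = 0;
        int i;

        for (i = 0; i < NUM_PREV_ROOTS; i++)
                if (eptp_matches(prev_eptp[i], eptp))
                        roots |= ROOT_PREVIOUS(i);

        if (roots)      /* one shared invalidation path for every caller */
                printf("invalidate via roots mask %#lx\n", roots);
        return 0;
}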
From patchwork Thu Feb 16 23:53:18 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13144156
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 11/14] kvm: x86/mmu: Allow the roots to be invalid in FNAME(invlpg)
Date: Fri, 17 Feb 2023 07:53:18 +0800
Message-Id: <20230216235321.735214-2-jiangshanlai@gmail.com>
In-Reply-To: <20230216235321.735214-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com> <20230216235321.735214-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Don't assume the current root is valid; check it instead, and remove
the WARN(). Also move the check for a valid root into FNAME(invlpg) to
simplify the code.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c         | 3 +--
 arch/x86/kvm/mmu/paging_tmpl.h | 4 +---
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9b5e3afbcdb4..7d5ff2b0f6d5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5786,8 +5786,7 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		mmu->invlpg(vcpu, addr, mmu->root.hpa);
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
-		if ((roots & KVM_MMU_ROOT_PREVIOUS(i)) &&
-		    VALID_PAGE(mmu->prev_roots[i].hpa))
+		if (roots & KVM_MMU_ROOT_PREVIOUS(i))
 			mmu->invlpg(vcpu, addr, mmu->prev_roots[i].hpa);
 	}
 }
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7db167876cd7..9be5a0f22a9f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -904,10 +904,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, u64 addr, hpa_t root_hpa)
 	 */
 	mmu_topup_memory_caches(vcpu, true);
 
-	if (!VALID_PAGE(root_hpa)) {
-		WARN_ON(1);
+	if (!VALID_PAGE(root_hpa))
 		return;
-	}
 
 	write_lock(&vcpu->kvm->mmu_lock);
 	for_each_shadow_entry_using_root(vcpu, root_hpa, addr, iterator) {
From patchwork Thu Feb 16 23:53:19 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13144157
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 12/14] kvm: x86/mmu: Remove FNAME(invlpg) and use FNAME(sync_spte) to update vTLB instead.
Date: Fri, 17 Feb 2023 07:53:19 +0800
Message-Id: <20230216235321.735214-3-jiangshanlai@gmail.com>
In-Reply-To: <20230216235321.735214-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com> <20230216235321.735214-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

In a hardware TLB, invalidating an entry means removing its translation
from the TLB. In KVM's shadowed vTLB, the translations (combinations of
shadow paging and the hardware TLB) are generally kept as long as they
remain clean, even when the TLB of an address space (i.e. a PCID, or
all of them) is flushed, with the help of write-protection, sp->unsync,
and kvm_sync_page().

However, FNAME(invlpg) always removes a vTLB entry when sp->unsync is
set, only for the entry to be recreated later, so a remote flush is
required even when the original vTLB entry is clean. On top of that, it
duplicates the logic of FNAME(sync_spte) for invalidating a single vTLB
entry.

To address both problems, use FNAME(sync_spte) to share the code and
slightly change the semantics: clean vTLB entries are now kept.
Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/mmu/mmu.c          | 56 ++++++++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h  | 57 ---------------------------------
 3 files changed, 39 insertions(+), 75 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cce4243d6688..79dbf20ca026 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -447,7 +447,6 @@ struct kvm_mmu {
 			    struct x86_exception *exception);
 	int (*sync_spte)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp, int i);
-	void (*invlpg)(struct kvm_vcpu *vcpu, u64 addr, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
 	union kvm_cpu_role cpu_role;
 	union kvm_mmu_page_role root_role;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7d5ff2b0f6d5..a8ac37d51287 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1073,14 +1073,6 @@ static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
 
-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-	struct kvm_mmu_memory_cache *mc;
-
-	mc = &vcpu->arch.mmu_pte_list_desc_cache;
-	return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
 static void rmap_remove(struct kvm *kvm, u64 *spte)
 {
 	struct kvm_memslots *slots;
@@ -4527,7 +4519,6 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 }
 
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
@@ -5118,7 +5109,6 @@ static void paging64_init_context(struct kvm_mmu *context)
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_spte = paging64_sync_spte;
-	context->invlpg = paging64_invlpg;
 }
 
 static void paging32_init_context(struct kvm_mmu *context)
@@ -5126,7 +5116,6 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
 	context->sync_spte = paging32_sync_spte;
-	context->invlpg = paging32_invlpg;
 }
 
 static union kvm_cpu_role
@@ -5215,7 +5204,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
@@ -5347,7 +5335,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 		context->page_fault = ept_page_fault;
 		context->gva_to_gpa = ept_gva_to_gpa;
 		context->sync_spte = ept_sync_spte;
-		context->invlpg = ept_invlpg;
 
 		update_permission_bitmask(context, true);
 		context->pkru_mask = 0;
@@ -5388,7 +5375,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 	 * L2 page tables are never shadowed, so there is no need to sync
 	 * SPTEs.
 	 */
-	g_context->invlpg = NULL;
+	g_context->sync_spte = NULL;
 
 	/*
 	 * Note that arch.mmu->gva_to_gpa translates l2_gpa to l1_gpa using
@@ -5763,6 +5750,41 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
+static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				      u64 addr, hpa_t root_hpa)
+{
+	struct kvm_shadow_walk_iterator iterator;
+
+	vcpu_clear_mmio_info(vcpu, addr);
+
+	if (!VALID_PAGE(root_hpa))
+		return;
+
+	write_lock(&vcpu->kvm->mmu_lock);
+	for_each_shadow_entry_using_root(vcpu, root_hpa, addr, iterator) {
+		struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
+
+		if (sp->unsync) {
+			/*
+			 * Get the gfn beforehand for later flushing.
+			 * Although mmu->sync_spte() doesn't change it, just
+			 * avoid the dependency.
+			 */
+			gfn_t gfn = kvm_mmu_page_get_gfn(sp, iterator.index);
+			int ret = mmu->sync_spte(vcpu, sp, iterator.index);
+
+			if (ret < 0)
+				mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
+			if (ret)
+				kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, PG_LEVEL_4K);
+		}
+
+		if (!sp->unsync_children)
+			break;
+	}
+	write_unlock(&vcpu->kvm->mmu_lock);
+}
+
 void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			     u64 addr, unsigned long roots)
 {
@@ -5779,15 +5801,15 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		static_call(kvm_x86_flush_tlb_gva)(vcpu, addr);
 	}
 
-	if (!mmu->invlpg)
+	if (!mmu->sync_spte)
 		return;
 
 	if (roots & KVM_MMU_ROOT_CURRENT)
-		mmu->invlpg(vcpu, addr, mmu->root.hpa);
+		__kvm_mmu_invalidate_addr(vcpu, mmu, addr, mmu->root.hpa);
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if (roots & KVM_MMU_ROOT_PREVIOUS(i))
-			mmu->invlpg(vcpu, addr, mmu->prev_roots[i].hpa);
+			__kvm_mmu_invalidate_addr(vcpu, mmu, addr, mmu->prev_roots[i].hpa);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_addr);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 9be5a0f22a9f..fca5ce349d9d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -887,63 +887,6 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
 
-/* Note, @addr is a GPA when invlpg() invalidates an L2 GPA translation in shadowed TDP */
-static void FNAME(invlpg)(struct kvm_vcpu *vcpu, u64 addr, hpa_t root_hpa)
-{
-	struct kvm_shadow_walk_iterator iterator;
-	struct kvm_mmu_page *sp;
-	u64 old_spte;
-	int level;
-	u64 *sptep;
-
-	vcpu_clear_mmio_info(vcpu, addr);
-
-	/*
-	 * No need to check return value here, rmap_can_add() can
-	 * help us to skip pte prefetch later.
-	 */
-	mmu_topup_memory_caches(vcpu, true);
-
-	if (!VALID_PAGE(root_hpa))
-		return;
-
-	write_lock(&vcpu->kvm->mmu_lock);
-	for_each_shadow_entry_using_root(vcpu, root_hpa, addr, iterator) {
-		level = iterator.level;
-		sptep = iterator.sptep;
-
-		sp = sptep_to_sp(sptep);
-		old_spte = *sptep;
-		if (is_last_spte(old_spte, level)) {
-			pt_element_t gpte;
-			gpa_t pte_gpa;
-
-			if (!sp->unsync)
-				break;
-
-			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-			pte_gpa += spte_index(sptep) * sizeof(pt_element_t);
-
-			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
-			if (is_shadow_present_pte(old_spte))
-				kvm_flush_remote_tlbs_sptep(vcpu->kvm, sptep);
-
-			if (!rmap_can_add(vcpu))
-				break;
-
-			if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
-						       sizeof(pt_element_t)))
-				break;
-
-			FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
-		}
-
-		if (!sp->unsync_children)
-			break;
-	}
-	write_unlock(&vcpu->kvm->mmu_lock);
-}
-
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
 static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			       gpa_t addr, u64 access,
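[Editorial note: the semantic shift is worth spelling out. The removed
FNAME(invlpg) zapped the spte of an unsync page and relied on a later
prefetch to rebuild it, whereas sync_spte-based invalidation recomputes
the spte from the guest PTE in place; an entry that already matches the
guest PTE is simply kept, so no remote flush is needed for it. A toy
model of that decision, where sync_one() is a greatly simplified
stand-in for mmu->sync_spte() (the real hook also handles zapping via a
negative return value):]

#include <stdio.h>

/* Toy vTLB entry: what the shadow page currently maps vs. what the
 * guest PTE says it should map. */
struct entry {
        unsigned long long shadowed;    /* current translation */
        unsigned long long guest_pte;   /* authoritative guest value */
};

/* Returns 0 if the entry was already clean (kept, no flush needed),
 * 1 if it had to be updated (remote flush needed). */
static int sync_one(struct entry *e)
{
        if (e->shadowed == e->guest_pte)
                return 0;
        e->shadowed = e->guest_pte;
        return 1;
}

int main(void)
{
        struct entry clean = { 0x1000, 0x1000 };
        struct entry stale = { 0x1000, 0x2000 };

        printf("clean entry: flush needed? %d\n", sync_one(&clean));  /* 0 */
        printf("stale entry: flush needed? %d\n", sync_one(&stale));  /* 1 */
        return 0;
}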
From patchwork Thu Feb 16 23:53:20 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13144158
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 13/14] kvm: x86/mmu: Remove @no_dirty_log from FNAME(prefetch_gpte)
Date: Fri, 17 Feb 2023 07:53:20 +0800
Message-Id: <20230216235321.735214-4-jiangshanlai@gmail.com>
In-Reply-To: <20230216235321.735214-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com> <20230216235321.735214-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

FNAME(prefetch_gpte) is now always called with @no_dirty_log=true, so
drop the parameter.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fca5ce349d9d..e04950015dc4 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -519,7 +519,7 @@ static int FNAME(walk_addr)(struct guest_walker *walker,
 
 static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
+		     u64 *spte, pt_element_t gpte)
 {
 	struct kvm_memory_slot *slot;
 	unsigned pte_access;
@@ -535,8 +535,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
-			no_dirty_log && (pte_access & ACC_WRITE_MASK));
+	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
 	if (!slot)
 		return false;
 
@@ -605,7 +604,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 		if (is_shadow_present_pte(*spte))
 			continue;
 
-		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
+		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i]))
 			break;
 	}
 }
From patchwork Thu Feb 16 23:53:21 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13144159
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V3 14/14] kvm: x86/mmu: Skip calling mmu->sync_spte() when the spte is 0
Date: Fri, 17 Feb 2023 07:53:21 +0800
Message-Id: <20230216235321.735214-5-jiangshanlai@gmail.com>
In-Reply-To: <20230216235321.735214-1-jiangshanlai@gmail.com>
References: <20230216154115.710033-1-jiangshanlai@gmail.com> <20230216235321.735214-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Sync the spte only when the spte is set, which avoids the indirect call
to mmu->sync_spte() for sptes that are not present.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c         | 4 ++--
 arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8ac37d51287..cd8c38463c97 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1942,7 +1942,7 @@ static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		return -1;
 
 	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
-		int ret = vcpu->arch.mmu->sync_spte(vcpu, sp, i);
+		int ret = sp->spt[i] ? vcpu->arch.mmu->sync_spte(vcpu, sp, i) : 0;
 
 		if (ret < -1)
 			return -1;
@@ -5764,7 +5764,7 @@ static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu
 	for_each_shadow_entry_using_root(vcpu, root_hpa, addr, iterator) {
 		struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
 
-		if (sp->unsync) {
+		if (sp->unsync && *iterator.sptep) {
 			/*
 			 * Get the gfn beforehand for later flushing.
 			 * Although mmu->sync_spte() doesn't change it, just
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e04950015dc4..3373d6705634 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -933,7 +933,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	gpa_t pte_gpa;
 	gfn_t gfn;
 
-	if (!sp->spt[i])
+	if (WARN_ON_ONCE(!sp->spt[i]))
 		return 0;
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
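[Editorial note: the saving here is one indirect call, typically a
retpoline, per empty spte slot; guarding on the entry first keeps the
common all-zero case on a straight-line path. A minimal standalone
model, where sync_spte_fn() is a hypothetical stand-in for
mmu->sync_spte() and the counter just demonstrates how many indirect
calls are skipped:]

#include <stdio.h>

#define SPTE_ENT_PER_PAGE 8     /* 512 in the real MMU; shrunk for the demo */

static int calls;

/* Stand-in for the mmu->sync_spte() indirect call; the real return
 * value semantics (zap/flush) are omitted here. */
static int sync_spte_fn(const unsigned long long *spt, int i)
{
        (void)spt;
        (void)i;
        calls++;
        return 0;
}

int main(void)
{
        unsigned long long spt[SPTE_ENT_PER_PAGE] = { [2] = 0x2000, [5] = 0x5000 };
        int (*sync_spte)(const unsigned long long *, int) = sync_spte_fn;
        int i, ret;

        for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
                /* Take the indirect branch only for sptes that are set. */
                ret = spt[i] ? sync_spte(spt, i) : 0;
                (void)ret;
        }
        printf("indirect calls made: %d of %d slots\n", calls, SPTE_ENT_PER_PAGE);
        return 0;
}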