From patchwork Thu Jan 5 09:58:42 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089624
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH 1/7] kvm: x86/mmu: Use KVM_MMU_ROOT_XXX for kvm_mmu_invalidate_gva() Date: Thu, 5 Jan 2023 17:58:42 +0800 Message-Id: <20230105095848.6061-2-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com> References: <20230105095848.6061-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan The @root_hpa for kvm_mmu_invalidate_gva() is called with @mmu->root.hpa or INVALID_PAGE. Replace them with KVM_MMU_ROOT_XXX. No fuctionalities changed. Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/mmu/mmu.c | 39 ++++++++++++++++----------------- arch/x86/kvm/x86.c | 2 +- 3 files changed, 21 insertions(+), 22 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2f5bf581d00a..dbea616bccce 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2026,7 +2026,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code, void *insn, int insn_len); void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva); void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - gva_t gva, hpa_t root_hpa); + gva_t gva, ulong roots_to_invalidate); void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid); void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 5407649de547..90339b71bd56 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5693,8 +5693,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err } EXPORT_SYMBOL_GPL(kvm_mmu_page_fault); +/* roots_to_invalidte must be some combination of the KVM_MMU_ROOT_* flags */ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, - gva_t gva, hpa_t root_hpa) + gva_t gva, ulong roots_to_invalidate) { int i; @@ -5710,31 +5711,29 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, if (!mmu->invlpg) return; - if (root_hpa == INVALID_PAGE) { + if ((roots_to_invalidate & KVM_MMU_ROOT_CURRENT) && VALID_PAGE(mmu->root.hpa)) mmu->invlpg(vcpu, gva, mmu->root.hpa); - /* - * INVLPG is required to invalidate any global mappings for the VA, - * irrespective of PCID. Since it would take us roughly similar amount - * of work to determine whether any of the prev_root mappings of the VA - * is marked global, or to just sync it blindly, so we might as well - * just always sync it. - * - * Mappings not reachable via the current cr3 or the prev_roots will be - * synced when switching to that cr3, so nothing needs to be done here - * for them. - */ - for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) - if (VALID_PAGE(mmu->prev_roots[i].hpa)) - mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa); - } else { - mmu->invlpg(vcpu, gva, root_hpa); - } + for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) + if ((roots_to_invalidate & KVM_MMU_ROOT_PREVIOUS(i)) && + VALID_PAGE(mmu->prev_roots[i].hpa)) + mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa); } void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva) { - kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE); + /* + * INVLPG is required to invalidate any global mappings for the VA, + * irrespective of PCID. 
From patchwork Thu Jan 5 09:58:43 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089625
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 2/7] kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in kvm_mmu_invpcid_gva()
Date: Thu, 5 Jan 2023 17:58:43 +0800
Message-Id: <20230105095848.6061-3-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com>
References: <20230105095848.6061-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Use kvm_mmu_invalidate_gva() instead of open-coded calls to
mmu->invlpg().

No functional change intended.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 90339b71bd56..b0e7ac6d4e88 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5742,27 +5742,20 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
-	bool tlb_flush = false;
+	ulong roots_to_invalidate = 0;
 	uint i;
 
-	if (pcid == kvm_get_active_pcid(vcpu)) {
-		if (mmu->invlpg)
-			mmu->invlpg(vcpu, gva, mmu->root.hpa);
-		tlb_flush = true;
-	}
+	if (pcid == kvm_get_active_pcid(vcpu))
+		roots_to_invalidate |= KVM_MMU_ROOT_CURRENT;
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
 		if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
-		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
-			if (mmu->invlpg)
-				mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
-			tlb_flush = true;
-		}
+		    pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd))
+			roots_to_invalidate |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
 
-	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
-
+	if (roots_to_invalidate)
+		kvm_mmu_invalidate_gva(vcpu, mmu, gva, roots_to_invalidate);
 	++vcpu->stat.invlpg;
 
 	/*
From patchwork Thu Jan 5 09:58:44 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089626
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 3/7] kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in nested_ept_invalidate_addr()
Date: Thu, 5 Jan 2023 17:58:44 +0800
Message-Id: <20230105095848.6061-4-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com>
References: <20230105095848.6061-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Use kvm_mmu_invalidate_gva() instead of an open-coded call to
mmu->invlpg().

No functional change intended.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c    | 1 +
 arch/x86/kvm/vmx/nested.c | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b0e7ac6d4e88..ffef9fe0c853 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5719,6 +5719,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		    VALID_PAGE(mmu->prev_roots[i].hpa))
 			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 557b9c468734..daf3138bafd1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -358,6 +358,7 @@ static bool nested_ept_root_matches(hpa_t root_hpa, u64 root_eptp, u64 eptp)
 static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 				       gpa_t addr)
 {
+	ulong roots_to_invalidate = 0;
 	uint i;
 	struct kvm_mmu_root_info *cached_root;
 
@@ -368,8 +369,10 @@ static void nested_ept_invalidate_addr(struct kvm_vcpu *vcpu, gpa_t eptp,
 
 		if (nested_ept_root_matches(cached_root->hpa, cached_root->pgd,
 					    eptp))
-			vcpu->arch.mmu->invlpg(vcpu, addr, cached_root->hpa);
+			roots_to_invalidate |= KVM_MMU_ROOT_PREVIOUS(i);
 	}
+	if (roots_to_invalidate)
+		kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, addr, roots_to_invalidate);
 }
 
 static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
From patchwork Thu Jan 5 09:58:45 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089627
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 4/7] kvm: x86/mmu: Reduce the update to the spte in FNAME(sync_page)
Date: Thu, 5 Jan 2023 17:58:45 +0800
Message-Id: <20230105095848.6061-5-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com>
References: <20230105095848.6061-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Sometimes when the guest updates its pagetable, it adds only new gptes
without changing any existing ones, so there is no point in updating
the sptes for those unchanged gptes.  Worse, when the sptes for these
unchanged gptes are updated anyway, the A/D bits are also cleared,
because make_spte() is called with prefetch=true, which might result in
unneeded TLB flushing.

Do nothing if the permissions are unchanged.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 11f17efbec97..ab0b031d4825 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -985,7 +985,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
 		u64 *sptep, spte;
 		struct kvm_memory_slot *slot;
-		unsigned pte_access;
+		unsigned old_pte_access, pte_access;
 		pt_element_t gpte;
 		gpa_t pte_gpa;
 		gfn_t gfn;
@@ -1025,6 +1025,12 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			flush = true;
 			continue;
 		}
+		/*
+		 * Do nothing if the permissions are unchanged.
+		 */
+		old_pte_access = kvm_mmu_page_get_access(sp, i);
+		if (old_pte_access == pte_access)
+			continue;
 
 		/* Update the shadowed access bits in case they changed. */
 		kvm_mmu_page_set_access(sp, i, pte_access);
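[ Editor's sketch, not part of the patch: a tiny stand-alone C model of the
  early-out added here -- the access bits recorded for the shadow page are
  compared with the freshly computed ones, and the SPTE rewrite is skipped
  when they match.  The types, values and names below are invented for the
  demo; only the comparison idea comes from the patch. ]

#include <stdbool.h>
#include <stdio.h>

#define ACC_WRITE_MASK (1u << 1)
#define ACC_USER_MASK  (1u << 2)

struct demo_shadow_page {
	unsigned shadowed_access[8];	/* what was last recorded per index */
};

/* Models the new "permissions unchanged" check in FNAME(sync_page). */
static bool sync_one_entry(struct demo_shadow_page *sp, int i, unsigned new_access)
{
	unsigned old_access = sp->shadowed_access[i];

	if (old_access == new_access)
		return false;	/* nothing changed: spare the SPTE rewrite and A/D clearing */

	sp->shadowed_access[i] = new_access;
	printf("index %d: access 0x%x -> 0x%x, SPTE rewritten\n",
	       i, old_access, new_access);
	return true;
}

int main(void)
{
	struct demo_shadow_page sp = { .shadowed_access = { ACC_USER_MASK } };

	sync_one_entry(&sp, 0, ACC_USER_MASK);				/* skipped */
	sync_one_entry(&sp, 0, ACC_USER_MASK | ACC_WRITE_MASK);	/* updated */
	return 0;
}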
From patchwork Thu Jan 5 09:58:46 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089629
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Peter Anvin" , kvm@vger.kernel.org Subject: [PATCH 5/7] kvm: x86/mmu: Move the code out of FNAME(sync_page)'s loop body into mmu.c Date: Thu, 5 Jan 2023 17:58:46 +0800 Message-Id: <20230105095848.6061-6-jiangshanlai@gmail.com> X-Mailer: git-send-email 2.19.1.6.gb485710b In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com> References: <20230105095848.6061-1-jiangshanlai@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Lai Jiangshan Rename mmu->sync_page to mmu->sync_spte and move the code out of FNAME(sync_page)'s loop body into mmu.c. Also initialize mmu->sync_spte as NULL for direct paging. No functionalities change intended. Signed-off-by: Lai Jiangshan --- arch/x86/include/asm/kvm_host.h | 4 +- arch/x86/kvm/mmu/mmu.c | 70 ++++++++++++--- arch/x86/kvm/mmu/paging_tmpl.h | 147 +++++++++++--------------------- 3 files changed, 110 insertions(+), 111 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index dbea616bccce..69b7967cd743 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -441,8 +441,8 @@ struct kvm_mmu { gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, gpa_t gva_or_gpa, u64 access, struct x86_exception *exception); - int (*sync_page)(struct kvm_vcpu *vcpu, - struct kvm_mmu_page *sp); + int (*sync_spte)(struct kvm_vcpu *vcpu, + struct kvm_mmu_page *sp, int i); void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa); struct kvm_mmu_root_info root; union kvm_cpu_role cpu_role; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ffef9fe0c853..f39bee1542d8 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1779,12 +1779,6 @@ static void mark_unsync(u64 *spte) kvm_mmu_mark_parents_unsync(sp); } -static int nonpaging_sync_page(struct kvm_vcpu *vcpu, - struct kvm_mmu_page *sp) -{ - return -1; -} - #define KVM_PAGE_ARRAY_NR 16 struct kvm_mmu_pages { @@ -1904,10 +1898,62 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp) &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \ if ((_sp)->gfn != (_gfn) || !sp_has_gptes(_sp)) {} else +static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) +{ + union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role; + bool flush = false; + int i; + + /* + * Ignore various flags when verifying that it's safe to sync a shadow + * page using the current MMU context. + * + * - level: not part of the overall MMU role and will never match as the MMU's + * level tracks the root level + * - access: updated based on the new guest PTE + * - quadrant: not part of the overall MMU role (similar to level) + */ + const union kvm_mmu_page_role sync_role_ign = { + .level = 0xf, + .access = 0x7, + .quadrant = 0x3, + .passthrough = 0x1, + }; + + /* + * Direct pages can never be unsync, and KVM should never attempt to + * sync a shadow page for a different MMU context, e.g. if the role + * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the + * reserved bits checks will be wrong, etc... + */ + if (WARN_ON_ONCE(sp->role.direct || + (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) + return -1; + + for (i = 0; i < SPTE_ENT_PER_PAGE; i++) { + int ret = vcpu->arch.mmu->sync_spte(vcpu, sp, i); + + if (ret < -1) + return -1; + flush |= ret; + } + + /* + * Note, any flush is purely for KVM's correctness, e.g. when dropping + * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier + * unmap or dirty logging event doesn't fail to flush. 
The guest is + * responsible for flushing the TLB to ensure any changes in protection + * bits are recognized, i.e. until the guest flushes or page faults on + * a relevant address, KVM is architecturally allowed to let vCPUs use + * cached translations with the old protection bits. + */ + return flush; +} + static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, struct list_head *invalid_list) { - int ret = vcpu->arch.mmu->sync_page(vcpu, sp); + int ret = __kvm_sync_page(vcpu, sp); if (ret < 0) kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list); @@ -4458,7 +4504,7 @@ static void nonpaging_init_context(struct kvm_mmu *context) { context->page_fault = nonpaging_page_fault; context->gva_to_gpa = nonpaging_gva_to_gpa; - context->sync_page = nonpaging_sync_page; + context->sync_spte = NULL; context->invlpg = NULL; } @@ -5047,7 +5093,7 @@ static void paging64_init_context(struct kvm_mmu *context) { context->page_fault = paging64_page_fault; context->gva_to_gpa = paging64_gva_to_gpa; - context->sync_page = paging64_sync_page; + context->sync_spte = paging64_sync_spte; context->invlpg = paging64_invlpg; } @@ -5055,7 +5101,7 @@ static void paging32_init_context(struct kvm_mmu *context) { context->page_fault = paging32_page_fault; context->gva_to_gpa = paging32_gva_to_gpa; - context->sync_page = paging32_sync_page; + context->sync_spte = paging32_sync_spte; context->invlpg = paging32_invlpg; } @@ -5144,7 +5190,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, context->cpu_role.as_u64 = cpu_role.as_u64; context->root_role.word = root_role.word; context->page_fault = kvm_tdp_page_fault; - context->sync_page = nonpaging_sync_page; + context->sync_spte = NULL; context->invlpg = NULL; context->get_guest_pgd = get_cr3; context->get_pdptr = kvm_pdptr_read; @@ -5276,7 +5322,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly, context->page_fault = ept_page_fault; context->gva_to_gpa = ept_gva_to_gpa; - context->sync_page = ept_sync_page; + context->sync_spte = ept_sync_spte; context->invlpg = ept_invlpg; update_permission_bitmask(context, true); diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index ab0b031d4825..3bc13b9b61d1 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -942,120 +942,73 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, * can't change unless all sptes pointing to it are nuked first. * * Returns - * < 0: the sp should be zapped + * < 0: failed to sync * 0: the sp is synced and no tlb flushing is required * > 0: the sp is synced and tlb flushing is required */ -static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) +static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i) { - union kvm_mmu_page_role root_role = vcpu->arch.mmu->root_role; - int i; bool host_writable; gpa_t first_pte_gpa; - bool flush = false; - - /* - * Ignore various flags when verifying that it's safe to sync a shadow - * page using the current MMU context. 
- * - * - level: not part of the overall MMU role and will never match as the MMU's - * level tracks the root level - * - access: updated based on the new guest PTE - * - quadrant: not part of the overall MMU role (similar to level) - */ - const union kvm_mmu_page_role sync_role_ign = { - .level = 0xf, - .access = 0x7, - .quadrant = 0x3, - .passthrough = 0x1, - }; + u64 *sptep, spte; + struct kvm_memory_slot *slot; + unsigned old_pte_access, pte_access; + pt_element_t gpte; + gpa_t pte_gpa; + gfn_t gfn; - /* - * Direct pages can never be unsync, and KVM should never attempt to - * sync a shadow page for a different MMU context, e.g. if the role - * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the - * reserved bits checks will be wrong, etc... - */ - if (WARN_ON_ONCE(sp->role.direct || - (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) - return -1; + if (!sp->spt[i]) + return 0; first_pte_gpa = FNAME(get_level1_sp_gpa)(sp); + pte_gpa = first_pte_gpa + i * sizeof(pt_element_t); - for (i = 0; i < SPTE_ENT_PER_PAGE; i++) { - u64 *sptep, spte; - struct kvm_memory_slot *slot; - unsigned old_pte_access, pte_access; - pt_element_t gpte; - gpa_t pte_gpa; - gfn_t gfn; - - if (!sp->spt[i]) - continue; - - pte_gpa = first_pte_gpa + i * sizeof(pt_element_t); - - if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte, - sizeof(pt_element_t))) - return -1; - - if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) { - flush = true; - continue; - } - - gfn = gpte_to_gfn(gpte); - pte_access = sp->role.access; - pte_access &= FNAME(gpte_access)(gpte); - FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); - - if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access)) - continue; + if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte, + sizeof(pt_element_t))) + return -1; - /* - * Drop the SPTE if the new protections would result in a RWX=0 - * SPTE or if the gfn is changing. The RWX=0 case only affects - * EPT with execute-only support, i.e. EPT without an effective - * "present" bit, as all other paging modes will create a - * read-only SPTE if pte_access is zero. - */ - if ((!pte_access && !shadow_present_mask) || - gfn != kvm_mmu_page_get_gfn(sp, i)) { - drop_spte(vcpu->kvm, &sp->spt[i]); - flush = true; - continue; - } - /* - * Do nothing if the permissions are unchanged. - */ - old_pte_access = kvm_mmu_page_get_access(sp, i); - if (old_pte_access == pte_access) - continue; + if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) + return 1; - /* Update the shadowed access bits in case they changed. */ - kvm_mmu_page_set_access(sp, i, pte_access); + gfn = gpte_to_gfn(gpte); + pte_access = sp->role.access; + pte_access &= FNAME(gpte_access)(gpte); + FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); - sptep = &sp->spt[i]; - spte = *sptep; - host_writable = spte & shadow_host_writable_mask; - slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); - make_spte(vcpu, sp, slot, pte_access, gfn, - spte_to_pfn(spte), spte, true, false, - host_writable, &spte); + if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access)) + return 0; - flush |= mmu_spte_update(sptep, spte); + /* + * Drop the SPTE if the new protections would result in a RWX=0 + * SPTE or if the gfn is changing. The RWX=0 case only affects + * EPT with execute-only support, i.e. EPT without an effective + * "present" bit, as all other paging modes will create a + * read-only SPTE if pte_access is zero. 
+ */ + if ((!pte_access && !shadow_present_mask) || + gfn != kvm_mmu_page_get_gfn(sp, i)) { + drop_spte(vcpu->kvm, &sp->spt[i]); + return 1; } - /* - * Note, any flush is purely for KVM's correctness, e.g. when dropping - * an existing SPTE or clearing W/A/D bits to ensure an mmu_notifier - * unmap or dirty logging event doesn't fail to flush. The guest is - * responsible for flushing the TLB to ensure any changes in protection - * bits are recognized, i.e. until the guest flushes or page faults on - * a relevant address, KVM is architecturally allowed to let vCPUs use - * cached translations with the old protection bits. + * Do nothing if the permissions are unchanged. */ - return flush; + old_pte_access = kvm_mmu_page_get_access(sp, i); + if (old_pte_access == pte_access) + return 0; + + /* Update the shadowed access bits in case they changed. */ + kvm_mmu_page_set_access(sp, i, pte_access); + + sptep = &sp->spt[i]; + spte = *sptep; + host_writable = spte & shadow_host_writable_mask; + slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); + make_spte(vcpu, sp, slot, pte_access, gfn, + spte_to_pfn(spte), spte, true, false, + host_writable, &spte); + + return mmu_spte_update(sptep, spte); } #undef pt_element_t From patchwork Thu Jan 5 09:58:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lai Jiangshan X-Patchwork-Id: 13089628 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9294C3DA7D for ; Thu, 5 Jan 2023 09:58:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232498AbjAEJ6j (ORCPT ); Thu, 5 Jan 2023 04:58:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232284AbjAEJ6D (ORCPT ); Thu, 5 Jan 2023 04:58:03 -0500 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3504B4FD5D; Thu, 5 Jan 2023 01:58:02 -0800 (PST) Received: by mail-pj1-x102f.google.com with SMTP id v23so39321041pju.3; Thu, 05 Jan 2023 01:58:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=xooBmoFIJz8fPGLDNUh+0vSduR7yUeIqQpdajeiQf9s=; b=bZoPpF30y1Ot8G/sAAV52wi99QXPiohVC+XteLpF9qas44F8VFP2KMG+GUweX3vBA5 Ju6GMqxVDKB20IheDiK06F49xNrE+NiL+0vIx2ew5GJFYFdbXKqSwqhoFUzrvM2n3cuF xR13DzhLupNfTCattVj+pMWmgALou/OBu5O0XMIqcrp5NKEptizI86J9yRw4wZrorESA g5aSwjNg585DKWZnd3vFH3/BzaOFlr7wm0jnPeNeqsbhFD+M2VFk3uStX1NL8jq1GtpQ h0RpOUV730H5BbRQatsT/8MOzxXVDEpE471HTCah61jMyhiY3ALnzK6tbFAmZl3WMgoe GCug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xooBmoFIJz8fPGLDNUh+0vSduR7yUeIqQpdajeiQf9s=; b=cX4n2VT/UK8VkPKLkW21r6LSlmiL0hAel6/Hr46b57UUw5JK62gJMLP3qztzCMDNBN F150loVq8W4SpatMpM+//Y6IzHiosiWPcW9CXL0PysBj8eEOQHU/M+bH0KawqCmKX10F EVJA/ufBXpeMB58/TsJucbur4C/t2dE9sjJAcVDzq5fBR9fFSqpfBPTVM0/8PoQm9xDQ 
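[ Editor's sketch, not part of the patch: a stand-alone C model of the
  per-entry contract that replaces the old per-page hook.  The return-code
  convention (< 0 failure, 0 synced, > 0 synced and flush needed) follows the
  comment updated in this patch; everything else is simplified for the demo. ]

#include <stdbool.h>
#include <stdio.h>

#define SPTE_ENT_PER_PAGE 8	/* 512 in real KVM; shrunk for the demo */

/* Per-entry callback, the moral equivalent of mmu->sync_spte(). */
typedef int (*sync_spte_fn)(int index);

static int demo_sync_spte(int index)
{
	return (index % 3 == 0) ? 1 : 0;	/* pretend every third entry changed */
}

/* Mirrors the aggregation loop that moved from FNAME(sync_page) into mmu.c. */
static int sync_page(sync_spte_fn sync_spte)
{
	bool flush = false;
	int i;

	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
		int ret = sync_spte(i);

		if (ret < 0)
			return -1;	/* caller zaps the whole shadow page */
		flush |= ret > 0;
	}
	return flush;
}

int main(void)
{
	printf("flush needed: %d\n", sync_page(demo_sync_spte));
	return 0;
}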
From patchwork Thu Jan 5 09:58:47 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089628
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 6/7] kvm: x86/mmu: Remove FNAME(invlpg)
Date: Thu, 5 Jan 2023 17:58:47 +0800
Message-Id: <20230105095848.6061-7-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com>
References: <20230105095848.6061-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

Replace FNAME(invlpg) with FNAME(sync_spte).  FNAME(sync_spte),
combined with a walk of the shadow pagetable, meets the semantics of
the INVLPG instruction, and using it lets the code be shared with the
vTLB-flushing path (kvm_sync_page()) that invalidates each vTLB entry.

Signed-off-by: Lai Jiangshan
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/mmu/mmu.c          | 48 +++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h  | 59 ---------------------------------
 3 files changed, 31 insertions(+), 77 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 69b7967cd743..b80de8f53130 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -443,7 +443,6 @@ struct kvm_mmu {
 			    struct x86_exception *exception);
 	int (*sync_spte)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp, int i);
-	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	struct kvm_mmu_root_info root;
 	union kvm_cpu_role cpu_role;
 	union kvm_mmu_page_role root_role;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f39bee1542d8..1e5f2e79863f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1061,14 +1061,6 @@ static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
 
-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-	struct kvm_mmu_memory_cache *mc;
-
-	mc = &vcpu->arch.mmu_pte_list_desc_cache;
-	return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
 static void rmap_remove(struct kvm *kvm, u64 *spte)
 {
 	struct kvm_memslots *slots;
@@ -4505,7 +4497,6 @@ static void nonpaging_init_context(struct kvm_mmu *context)
 	context->page_fault = nonpaging_page_fault;
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 }
 
 static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
@@ -5094,7 +5085,6 @@ static void paging64_init_context(struct kvm_mmu *context)
 	context->page_fault = paging64_page_fault;
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_spte = paging64_sync_spte;
-	context->invlpg = paging64_invlpg;
 }
 
 static void paging32_init_context(struct kvm_mmu *context)
@@ -5102,7 +5092,6 @@ static void paging32_init_context(struct kvm_mmu *context)
 	context->page_fault = paging32_page_fault;
 	context->gva_to_gpa = paging32_gva_to_gpa;
 	context->sync_spte = paging32_sync_spte;
-	context->invlpg = paging32_invlpg;
 }
 
 static union kvm_cpu_role
@@ -5191,7 +5180,6 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->root_role.word = root_role.word;
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_spte = NULL;
-	context->invlpg = NULL;
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
@@ -5323,7 +5311,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 		context->page_fault = ept_page_fault;
 		context->gva_to_gpa = ept_gva_to_gpa;
 		context->sync_spte = ept_sync_spte;
-		context->invlpg = ept_invlpg;
 
 		update_permission_bitmask(context, true);
 		context->pkru_mask = 0;
@@ -5364,7 +5351,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 	 * L2 page tables are never shadowed, so there is no need to sync
 	 * SPTEs.
 	 */
-	g_context->invlpg = NULL;
+	g_context->sync_spte = NULL;
 
 	/*
 	 * Note that arch.mmu->gva_to_gpa translates l2_gpa to l1_gpa using
@@ -5739,6 +5726,33 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
+static void __kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				     gva_t gva, hpa_t root_hpa)
+{
+	struct kvm_shadow_walk_iterator iterator;
+
+	vcpu_clear_mmio_info(vcpu, gva);
+
+	write_lock(&vcpu->kvm->mmu_lock);
+	for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
+		struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
+
+		if (sp->unsync && *iterator.sptep) {
+			gfn_t gfn = kvm_mmu_page_get_gfn(sp, iterator.index);
+			int ret = mmu->sync_spte(vcpu, sp, iterator.index);
+
+			if (ret < 0)
+				mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
+			if (ret)
+				kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
+		}
+
+		if (!sp->unsync_children)
+			break;
+	}
+	write_unlock(&vcpu->kvm->mmu_lock);
+}
+
 /* roots_to_invalidate must be some combination of the KVM_MMU_ROOT_* flags */
 void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			    gva_t gva, ulong roots_to_invalidate)
@@ -5754,16 +5768,16 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
 	}
 
-	if (!mmu->invlpg)
+	if (!mmu->sync_spte)
 		return;
 
 	if ((roots_to_invalidate & KVM_MMU_ROOT_CURRENT) && VALID_PAGE(mmu->root.hpa))
-		mmu->invlpg(vcpu, gva, mmu->root.hpa);
+		__kvm_mmu_invalidate_gva(vcpu, mmu, gva, mmu->root.hpa);
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		if ((roots_to_invalidate & KVM_MMU_ROOT_PREVIOUS(i)) &&
 		    VALID_PAGE(mmu->prev_roots[i].hpa))
-			mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+			__kvm_mmu_invalidate_gva(vcpu, mmu, gva, mmu->prev_roots[i].hpa);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3bc13b9b61d1..62aac5d7d38c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -851,65 +851,6 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
 	return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
 }
 
-static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
-{
-	struct kvm_shadow_walk_iterator iterator;
-	struct kvm_mmu_page *sp;
-	u64 old_spte;
-	int level;
-	u64 *sptep;
-
-	vcpu_clear_mmio_info(vcpu, gva);
-
-	/*
-	 * No need to check return value here, rmap_can_add() can
-	 * help us to skip pte prefetch later.
-	 */
-	mmu_topup_memory_caches(vcpu, true);
-
-	if (!VALID_PAGE(root_hpa)) {
-		WARN_ON(1);
-		return;
-	}
-
-	write_lock(&vcpu->kvm->mmu_lock);
-	for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
-		level = iterator.level;
-		sptep = iterator.sptep;
-
-		sp = sptep_to_sp(sptep);
-		old_spte = *sptep;
-		if (is_last_spte(old_spte, level)) {
-			pt_element_t gpte;
-			gpa_t pte_gpa;
-
-			if (!sp->unsync)
-				break;
-
-			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-			pte_gpa += spte_index(sptep) * sizeof(pt_element_t);
-
-			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
-			if (is_shadow_present_pte(old_spte))
-				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
-
-			if (!rmap_can_add(vcpu))
-				break;
-
-			if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
-						       sizeof(pt_element_t)))
-				break;
-
-			FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
-		}
-
-		if (!sp->unsync_children)
-			break;
-	}
-	write_unlock(&vcpu->kvm->mmu_lock);
-}
-
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
 static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, gpa_t addr, u64 access,
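[ Editor's sketch, not part of the patch: the INVLPG emulation flow that
  results from dropping FNAME(invlpg), written as a stand-alone C model.
  The 4-level index math and the stand-in sync_spte() are illustrative only;
  the real walk is for_each_shadow_entry_using_root() in mmu.c. ]

#include <stdio.h>

#define SHADOW_LEVELS 4		/* leaf entries live at level 1 */

/* Stand-in for mmu->sync_spte(); see patch 5 for the real contract. */
static int sync_spte(int level, int index)
{
	printf("resync leaf spte: level %d, index %d\n", level, index);
	return 1;	/* pretend the entry changed, so a TLB flush is needed */
}

/*
 * Mirrors __kvm_mmu_invalidate_gva(): INVLPG emulation walks the shadow
 * page table for the gva in one root and reuses the per-entry sync
 * callback on the unsync leaf it reaches, instead of keeping a dedicated
 * FNAME(invlpg) handler.
 */
static void invalidate_gva_in_root(unsigned long gva)
{
	int level;

	for (level = SHADOW_LEVELS; level >= 1; level--) {
		int index = (int)((gva >> (12 + 9 * (level - 1))) & 0x1ff);

		printf("walk level %d, index %d\n", level, index);
		if (level == 1)		/* reached an unsync leaf shadow page */
			sync_spte(level, index);
	}
}

int main(void)
{
	invalidate_gva_in_root(0x7f1234567000UL);
	return 0;
}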
From patchwork Thu Jan 5 09:58:48 2023
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 13089630
X-Mailing-List: kvm@vger.kernel.org
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Lai Jiangshan, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 7/7] kvm: x86/mmu: Remove @no_dirty_log from FNAME(prefetch_gpte)
Date: Thu, 5 Jan 2023 17:58:48 +0800
Message-Id: <20230105095848.6061-8-jiangshanlai@gmail.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20230105095848.6061-1-jiangshanlai@gmail.com>
References: <20230105095848.6061-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

FNAME(prefetch_gpte) is always called with @no_dirty_log=true, so drop
the argument.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/paging_tmpl.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 62aac5d7d38c..2db844d5d33c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -519,7 +519,7 @@ static int FNAME(walk_addr)(struct guest_walker *walker,
 
 static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
+		     u64 *spte, pt_element_t gpte)
 {
 	struct kvm_memory_slot *slot;
 	unsigned pte_access;
@@ -535,8 +535,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
-			no_dirty_log && (pte_access & ACC_WRITE_MASK));
+	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
 	if (!slot)
 		return false;
 
@@ -605,7 +604,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 		if (is_shadow_present_pte(*spte))
 			continue;
 
-		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
+		if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i]))
 			break;
 	}
 }