From patchwork Thu Jan 19 17:35:53 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108428
Date: Thu, 19 Jan 2023 09:35:53 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
References: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-2-dmatlack@google.com>
Subject: [PATCH 1/7] KVM: Rename kvm_arch_flush_remote_tlb() to
 kvm_arch_flush_remote_tlbs()
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
 Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Sean Christopherson, David Matlack,
 Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu,
 linux-mips@vger.kernel.org, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Rename kvm_arch_flush_remote_tlb() and the associated macro
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLB to kvm_arch_flush_remote_tlbs() and
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS respectively.

Making the name plural matches kvm_flush_remote_tlbs() and makes it
clearer that this function can affect more than one remote TLB.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/mips/include/asm/kvm_host.h | 4 ++--
 arch/mips/kvm/mips.c             | 2 +-
 arch/x86/include/asm/kvm_host.h  | 4 ++--
 include/linux/kvm_host.h         | 4 ++--
 virt/kvm/kvm_main.c              | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 2803c9c21ef9..849eb482ad15 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -896,7 +896,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-int kvm_arch_flush_remote_tlb(struct kvm *kvm);
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 36c8991b5d39..2e54e5fd8daa 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -981,7 +981,7 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	kvm_mips_callbacks->prepare_flush_shadow(kvm);
 	return 1;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4d2bc08794e4..1bacc3de2432 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1789,8 +1789,8 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 #define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	if (kvm_x86_ops.tlb_remote_flush &&
 	    !static_call(kvm_x86_tlb_remote_flush)(kvm))
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 109b18e2789c..76711afe4d17 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1477,8 +1477,8 @@ static inline void kvm_arch_free_vm(struct kvm *kvm)
 }
 #endif
 
-#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	return -ENOTSUPP;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..277507463678 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -363,7 +363,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	 * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
 	 * barrier here.
 	 */
-	if (!kvm_arch_flush_remote_tlb(kvm)
+	if (!kvm_arch_flush_remote_tlbs(kvm)
 	    || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.generic.remote_tlb_flush;
 }
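
The override mechanism being renamed here is easiest to see in isolation.
Below is a minimal, compilable userspace sketch of the same
#define/#ifndef pattern; the struct kvm stub, the vm_id field, and the
printf are invented for illustration and this is not kernel code:

#include <stdio.h>

#define ENOTSUPP 524			/* kernel-internal value, stubbed here */

struct kvm { int vm_id; };		/* stand-in for the real struct kvm */

/*
 * An architecture with a better-than-IPI flush defines the macro and the
 * function in its asm/kvm_host.h, as MIPS and x86 do in this patch.
 */
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
static int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
	printf("arch-specific flush for VM %d\n", kvm->vm_id);
	return 0;	/* 0: flushed, caller can skip the IPI fallback */
}

/* The generic header only supplies the default when no arch opted in. */
#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
static int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
	(void)kvm;
	return -ENOTSUPP;	/* nonzero: caller must fall back */
}
#endif

int main(void)
{
	struct kvm vm = { .vm_id = 1 };

	if (kvm_arch_flush_remote_tlbs(&vm))
		puts("would fall back to KVM_REQ_TLB_FLUSH IPIs");
	return 0;
}

If the arch definition and its macro are removed, the #ifndef default
kicks in and the caller takes the IPI-based fallback path, which is
exactly the contract kvm_flush_remote_tlbs() relies on above.
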
From patchwork Thu Jan 19 17:35:54 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108429
Date: Thu, 19 Jan 2023 09:35:54 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-3-dmatlack@google.com>
Subject: [PATCH 2/7] KVM: arm64: Use kvm_arch_flush_remote_tlbs()
From: David Matlack
To: Paolo Bonzini

Use kvm_arch_flush_remote_tlbs() instead of
CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL. The two mechanisms solve the same
problem, allowing architecture-specific code to provide a non-IPI
implementation of remote TLB flushing.

Dropping CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL allows KVM to standardize
all architectures on kvm_arch_flush_remote_tlbs() instead of
maintaining two mechanisms.

Opt to standardize on kvm_arch_flush_remote_tlbs() since it avoids
duplicating the generic TLB stats across architectures that implement
their own remote TLB flush.

This adds an extra function call to the ARM64 kvm_flush_remote_tlbs()
path, but (I assume) that is a small cost in comparison to flushing
remote TLBs.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 arch/arm64/kvm/Kconfig            | 1 -
 arch/arm64/kvm/mmu.c              | 6 +++---
 virt/kvm/kvm_main.c               | 2 --
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 113e20fdbb56..062800f1dc54 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -998,6 +998,9 @@ int __init kvm_set_ipa_limit(void);
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ca6eadeb7d1a..e9ac57098a0b 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -25,7 +25,6 @@ menuconfig KVM
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
-	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_XFER_TO_GUEST_WORK
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 01352f5838a0..8840f65e0e40 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -80,15 +80,15 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 }
 
 /**
- * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
+ * kvm_arch_flush_remote_tlbs() - flush all VM TLB entries for v7/8
  * @kvm:	pointer to kvm structure.
  *
  * Interface to HYP function to flush all VM TLB entries
  */
-void kvm_flush_remote_tlbs(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	++kvm->stat.generic.remote_tlb_flush_requests;
 	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	return 0;
 }
 
 static bool kvm_is_device_pfn(unsigned long pfn)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 277507463678..fefd3e3c8fe1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -347,7 +347,6 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 }
 EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 
-#ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	++kvm->stat.generic.remote_tlb_flush_requests;
@@ -368,7 +367,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	++kvm->stat.generic.remote_tlb_flush;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
-#endif
 
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
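
A sketch of the resulting control flow may help: after this patch,
arm64 no longer defines kvm_flush_remote_tlbs() itself, so the generic
copy runs, bumps the stats in one place, and only then calls into the
arch hook. The model below is not kernel code; the arm64_like flag, the
struct kvm_stats layout, and the puts() calls are stand-ins for
illustration:

#include <stdbool.h>
#include <stdio.h>

struct kvm_stats { unsigned long requests, flushes; };
struct kvm { struct kvm_stats stat; bool arm64_like; };

/* Stand-in for kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH). */
static bool make_all_cpus_request(struct kvm *kvm)
{
	(void)kvm;
	puts("IPI: request TLB flush on every vCPU");
	return true;
}

/*
 * Models the arm64 implementation added by this patch: flush in HYP and
 * return 0 so the generic code skips the IPI path.
 */
static int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
	if (kvm->arm64_like) {
		puts("kvm_call_hyp(__kvm_tlb_flush_vmid, ...)");
		return 0;
	}
	return 1;	/* no arch flush: ask for the generic fallback */
}

/* The single generic entry point: stats live here, not per-arch. */
static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	++kvm->stat.requests;
	if (kvm_arch_flush_remote_tlbs(kvm) == 0 ||
	    make_all_cpus_request(kvm))
		++kvm->stat.flushes;
}

int main(void)
{
	struct kvm vm = { .arm64_like = true };

	kvm_flush_remote_tlbs(&vm);
	printf("requests=%lu flushes=%lu\n",
	       vm.stat.requests, vm.stat.flushes);
	return 0;
}
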
From patchwork Thu Jan 19 17:35:55 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108430
Date: Thu, 19 Jan 2023 09:35:55 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-4-dmatlack@google.com>
Subject: [PATCH 3/7] KVM: x86/mmu: Collapse
 kvm_flush_remote_tlbs_with_{range,address}() together
From: David Matlack
To: Paolo Bonzini

Collapse kvm_flush_remote_tlbs_with_range() and
kvm_flush_remote_tlbs_with_address() into a single function. This
eliminates some lines of code and a useless NULL check on the range
struct.

Opportunistically switch from ENOTSUPP to EOPNOTSUPP to make checkpatch
happy.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aeb240b339f5..7740ca52dab4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,27 +246,20 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-		struct kvm_tlb_range *range)
-{
-	int ret = -ENOTSUPP;
-
-	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
-
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 		u64 start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
+	int ret = -EOPNOTSUPP;
 
 	range.start_gfn = start_gfn;
 	range.pages = pages;
 
-	kvm_flush_remote_tlbs_with_range(kvm, &range);
+	if (kvm_x86_ops.tlb_remote_flush_with_range)
+		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
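
The collapsed helper is easier to audit than the pair it replaces,
because the range now lives on the callee's stack. A hedged,
self-contained C model of the collapsed shape follows; the EOPNOTSUPP
value, the has_range_flush flag, and the print statements are invented
for illustration, and the real code dispatches through static_call():

#include <stdio.h>

#define EOPNOTSUPP 95	/* stubbed; the kernel uses <linux/errno.h> */

struct kvm { int has_range_flush; };
struct kvm_tlb_range { unsigned long long start_gfn, pages; };

/* Stand-in for static_call(kvm_x86_tlb_remote_flush_with_range)(...). */
static int tlb_remote_flush_with_range(struct kvm *kvm,
				       struct kvm_tlb_range *range)
{
	(void)kvm;
	printf("hypervisor-assisted flush of %llu page(s) at gfn %llu\n",
	       range->pages, range->start_gfn);
	return 0;
}

static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	(void)kvm;
	puts("full flush of all remote TLBs");
}

/*
 * The collapsed function: the range is built on the stack, so it can
 * never be NULL -- which is what made the old NULL check useless.
 */
static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
					       unsigned long long start_gfn,
					       unsigned long long pages)
{
	struct kvm_tlb_range range = { .start_gfn = start_gfn, .pages = pages };
	int ret = -EOPNOTSUPP;

	if (kvm->has_range_flush)
		ret = tlb_remote_flush_with_range(kvm, &range);
	if (ret)
		kvm_flush_remote_tlbs(kvm);
}

int main(void)
{
	struct kvm vm = { .has_range_flush = 0 };

	/* No range support: falls through to the full flush. */
	kvm_flush_remote_tlbs_with_address(&vm, 0x1000, 512);
	return 0;
}
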
From patchwork Thu Jan 19 17:35:56 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108431
Date: Thu, 19 Jan 2023 09:35:56 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-5-dmatlack@google.com>
Subject: [PATCH 4/7] KVM: x86/mmu: Rename kvm_flush_remote_tlbs_with_address()
From: David Matlack
To: Paolo Bonzini

Rename kvm_flush_remote_tlbs_with_address() to
kvm_flush_remote_tlbs_range(). The new name is shorter, which reduces
the number of callsites that have to be broken up across multiple
lines, and more readable, since it conveys that a range of memory is
being flushed rather than a single address.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 36 +++++++++++++++------------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +--
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  7 +++----
 4 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7740ca52dab4..36ce3110b7da 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,8 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-		u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;
@@ -806,7 +805,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
 	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		kvm_flush_remote_tlbs_range(kvm, gfn, 1);
 }
 
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -1180,8 +1179,8 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 	drop_spte(kvm, sptep);
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
-			KVM_PAGES_PER_HPAGE(sp->role.level));
+		kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+					    KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
 /*
@@ -1462,7 +1461,7 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	}
 
 	if (need_flush && kvm_available_flush_tlb_with_range()) {
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		kvm_flush_remote_tlbs_range(kvm, gfn, 1);
 		return false;
 	}
 
@@ -1632,8 +1631,8 @@ static void __rmap_add(struct kvm *kvm,
 		kvm->stat.max_mmu_rmap_size = rmap_count;
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+		kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+					    KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
@@ -2398,7 +2397,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			return;
 
 		drop_parent_pte(child, sptep);
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
+		kvm_flush_remote_tlbs_range(vcpu->kvm, child->gfn, 1);
 	}
 }
 
@@ -2882,8 +2881,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
-				KVM_PAGES_PER_HPAGE(level));
+		kvm_flush_remote_tlbs_range(vcpu->kvm, gfn,
+					    KVM_PAGES_PER_HPAGE(level));
 
 	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
 
@@ -5814,9 +5813,8 @@ slot_handle_level_range(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			if (flush && flush_on_yield) {
-				kvm_flush_remote_tlbs_with_address(kvm,
-						start_gfn,
-						iterator.gfn - start_gfn + 1);
+				kvm_flush_remote_tlbs_range(kvm, start_gfn,
+							    iterator.gfn - start_gfn + 1);
 				flush = false;
 			}
 			cond_resched_rwlock_write(&kvm->mmu_lock);
@@ -6171,8 +6169,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	}
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-						   gfn_end - gfn_start);
+		kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
 
 	kvm_mmu_invalidate_end(kvm, 0, -1ul);
 
@@ -6511,8 +6508,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 			kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
-				kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
-					KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+							    KVM_PAGES_PER_HPAGE(sp->role.level));
 			else
 				need_tlb_flush = 1;
 
@@ -6562,8 +6559,7 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
-	kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
-					   memslot->npages);
+	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
 }
 
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..e606a6d5e040 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,8 +164,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-					u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e5662dbd519c..fdad03f131c8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -929,8 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 
 			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 			if (is_shadow_present_pte(old_spte))
-				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+							    KVM_PAGES_PER_HPAGE(sp->role.level));
 
 			if (!rmap_can_add(vcpu))
 				break;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..7c21d15c58d8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -680,8 +680,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	if (ret)
 		return ret;
 
-	kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
-					   KVM_PAGES_PER_HPAGE(iter->level));
+	kvm_flush_remote_tlbs_range(kvm, iter->gfn, KVM_PAGES_PER_HPAGE(iter->level));
 
 	/*
 	 * No other thread can overwrite the removed SPTE as they must either
@@ -1080,8 +1079,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-				KVM_PAGES_PER_HPAGE(iter->level + 1));
+		kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+					    KVM_PAGES_PER_HPAGE(iter->level + 1));
 
 	/*
 	 * If the page fault was caused by a write but the page is write
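
Most of the converted callsites pass KVM_PAGES_PER_HPAGE(level) as the
page count, which is what makes the shorter name pay off. A small
sketch of such a callsite follows, with a simplified stand-in for the
real x86 KVM_PAGES_PER_HPAGE definition (the constants, the gfn value,
and the print are illustrative only):

#include <stdio.h>

/*
 * Simplified from x86: number of 4KiB pages covered by one mapping at
 * the given page-table level (1 = 4KiB, 2 = 2MiB, 3 = 1GiB).
 */
#define PT_LEVEL_BITS 9
#define KVM_PAGES_PER_HPAGE(level) \
	(1ULL << (((level) - 1) * PT_LEVEL_BITS))

static void kvm_flush_remote_tlbs_range(unsigned long long start_gfn,
					unsigned long long pages)
{
	printf("flush gfns [%llu, %llu)\n", start_gfn, start_gfn + pages);
}

int main(void)
{
	unsigned long long gfn = 0x80000;

	/*
	 * A typical callsite after the rename: one line instead of a
	 * wrapped two-line call.
	 */
	kvm_flush_remote_tlbs_range(gfn, KVM_PAGES_PER_HPAGE(2));
	return 0;
}
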
From patchwork Thu Jan 19 17:35:57 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108432
Date: Thu, 19 Jan 2023 09:35:57 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-6-dmatlack@google.com>
Subject: [PATCH 5/7] KVM: x86/MMU: Use gfn_t in kvm_flush_remote_tlbs_range()
From: David Matlack
To: Paolo Bonzini

Use gfn_t instead of u64 for the start_gfn parameter to
kvm_flush_remote_tlbs_range(), since that is the standard type for GFNs
throughout KVM.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 36ce3110b7da..1e2c2d711dbb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,7 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e606a6d5e040..851982a25502 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
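
In-tree, gfn_t is a typedef for u64, so this is a documentation change
rather than a behavioral one: the typed parameter tells readers the
value is a guest frame number, not a byte address. A short illustrative
sketch, in which the PAGE_SHIFT value and the gpa_to_gfn() helper are
simplified stand-ins:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;	/* KVM's guest frame number type (u64 in-tree) */

#define PAGE_SHIFT 12

/*
 * Converting between guest physical addresses and GFNs; using gfn_t in
 * a prototype documents which of the two a parameter carries.
 */
static gfn_t gpa_to_gfn(uint64_t gpa)
{
	return (gfn_t)(gpa >> PAGE_SHIFT);
}

static void kvm_flush_remote_tlbs_range(gfn_t start_gfn, uint64_t pages)
{
	printf("flush %" PRIu64 " page(s) from gfn %" PRIu64 "\n",
	       pages, (uint64_t)start_gfn);
}

int main(void)
{
	kvm_flush_remote_tlbs_range(gpa_to_gfn(0x123000), 1);
	return 0;
}
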
From patchwork Thu Jan 19 17:35:58 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108433
Date: Thu, 19 Jan 2023 09:35:58 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-7-dmatlack@google.com>
Subject: [PATCH 6/7] KVM: Allow range-based TLB invalidation from common code
From: David Matlack
To: Paolo Bonzini

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future cleanups:

 - Introduction of range-based TLBI on ARM
 - Eliminating kvm_arch_flush_remote_tlbs_memslot()
 - Moving the KVM/x86 TDP MMU to common code

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 ++---
 arch/x86/kvm/mmu/mmu_internal.h |  1 -
 include/linux/kvm_host.h        |  9 +++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1bacc3de2432..420713ac8916 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1799,6 +1799,9 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 		return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 #define kvm_arch_pmi_in_guest(vcpu) \
 	((vcpu) && (vcpu)->arch.handling_intr_from_guest)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e2c2d711dbb..491c28d22cbe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,7 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;
@@ -257,8 +257,7 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 	if (kvm_x86_ops.tlb_remote_flush_with_range)
 		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
 
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
+	return ret;
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 851982a25502..d5599f2d3f96 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,6 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 76711afe4d17..acfb17d9b44d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1356,6 +1356,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1484,6 +1485,14 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 }
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+						   gfn_t gfn, u64 pages)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fefd3e3c8fe1..c9fc693a39d9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -368,6 +368,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
+{
+	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, pages))
+		return;
+
+	/*
+	 * Fall back to flushing all TLBs if the architecture's range-based
+	 * TLB invalidation is unsupported or can't be performed for whatever
+	 * reason.
+	 */
+	kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
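
The new common entry point and its fallback are small enough to model
in full. A self-contained sketch, assuming an invented has_range_flush
flag in place of the real kvm_x86_ops check (not kernel code):

#include <stdio.h>

#define EOPNOTSUPP 95	/* stubbed; the kernel uses <linux/errno.h> */

typedef unsigned long long gfn_t;
typedef unsigned long long u64;
struct kvm { int has_range_flush; };

/*
 * Default from the generic header; x86 overrides it by defining
 * __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE.
 */
static int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
					    gfn_t gfn, u64 pages)
{
	if (!kvm->has_range_flush)
		return -EOPNOTSUPP;
	printf("range flush: %llu page(s) at gfn %llu\n", pages, gfn);
	return 0;
}

static void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	(void)kvm;
	puts("fallback: flush everything");
}

/* The new common-code entry point added by this patch. */
static void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
{
	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, pages))
		return;
	/* Range-based invalidation unsupported or failed: flush it all. */
	kvm_flush_remote_tlbs(kvm);
}

int main(void)
{
	struct kvm with = { 1 }, without = { 0 };

	kvm_flush_remote_tlbs_range(&with, 0x1000, 8);
	kvm_flush_remote_tlbs_range(&without, 0x1000, 8);
	return 0;
}
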
From patchwork Thu Jan 19 17:35:59 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108434
Date: Thu, 19 Jan 2023 09:35:59 -0800
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Message-ID: <20230119173559.2517103-8-dmatlack@google.com>
Subject: [PATCH 7/7] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to
 common code
From: David Matlack
To: Paolo Bonzini

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop
"arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a
range-based TLB invalidation where the range is defined by the memslot.
Now that kvm_flush_remote_tlbs_range() can be called from common code,
we can just use that and drop a bunch of duplicate code from the arch
directories.

Note this adds a lockdep assertion for slots_lock being held when
calling kvm_flush_remote_tlbs_memslot(), which was previously only
asserted on x86. MIPS has calls to kvm_flush_remote_tlbs_memslot(), but
they all hold the slots_lock, so the lockdep assertion continues to
hold true.

Also drop the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT ifdef gating
kvm_flush_remote_tlbs_memslot(), since it is no longer necessary.

Signed-off-by: David Matlack
---
 arch/arm64/kvm/arm.c     |  6 ------
 arch/mips/kvm/mips.c     | 10 ++--------
 arch/riscv/kvm/mmu.c     |  6 ------
 arch/x86/kvm/mmu/mmu.c   | 16 +---------------
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  7 +++----
 virt/kvm/kvm_main.c      | 18 ++++++++++++++++--
 7 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 698787ed87e9..54d5d0733b98 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1420,12 +1420,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
 {
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 2e54e5fd8daa..9f9a7ba7eb2b 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -199,7 +199,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 	/* Flush slot from GPA */
 	kvm_mips_flush_gpa_pt(kvm, slot->base_gfn,
 			      slot->base_gfn + slot->npages - 1);
-	kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+	kvm_flush_remote_tlbs_memslot(kvm, slot);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -235,7 +235,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn,
 					new->base_gfn + new->npages - 1);
 		if (needs_flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+			kvm_flush_remote_tlbs_memslot(kvm, new);
 		spin_unlock(&kvm->mmu_lock);
 	}
 }
@@ -987,12 +987,6 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 1;
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
 	long r;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 66ef19676fe4..87f30487f59f 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 491c28d22cbe..4af85888c98b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6528,7 +6528,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 	 */
 	if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte,
 			      PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true))
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
@@ -6547,20 +6547,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	}
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	/*
-	 * All current use cases for flushing the TLBs for a specific memslot
-	 * related to dirty logging, and many do the TLB flush out of mmu_lock.
-	 * The interaction between the various operations on memslot must be
-	 * serialized by slots_locks to ensure the TLB flush from one operation
-	 * is observed by any other operation on the same memslot.
-	 */
-	lockdep_assert_held(&kvm->slots_lock);
-	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
-}
-
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 508074e47bc0..ea7bb4035a60 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12617,7 +12617,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 		 * See is_writable_pte() for more details (the case involving
 		 * access-tracked SPTEs is particularly relevant).
 		 */
-		kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+		kvm_flush_remote_tlbs_memslot(kvm, new);
 	}
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index acfb17d9b44d..12dfecd27c9d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1357,6 +1357,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages);
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+				   const struct kvm_memory_slot *memslot);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1385,10 +1387,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					     unsigned long mask);
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot);
-#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 		      int *is_dirty, struct kvm_memory_slot **memslot);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c9fc693a39d9..9c10cd191a71 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -381,6 +381,20 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
 	kvm_flush_remote_tlbs(kvm);
 }
 
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+				   const struct kvm_memory_slot *memslot)
+{
+	/*
+	 * All current use cases for flushing the TLBs for a specific memslot
+	 * related to dirty logging, and many do the TLB flush out of mmu_lock.
+	 * The interaction between the various operations on memslot must be
+	 * serialized by slots_locks to ensure the TLB flush from one operation
+	 * is observed by any other operation on the same memslot.
+	 */
+	lockdep_assert_held(&kvm->slots_lock);
+	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
@@ -2188,7 +2202,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 	}
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 		return -EFAULT;
@@ -2305,7 +2319,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	KVM_MMU_UNLOCK(kvm);
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	return 0;
 }
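
The end state of the series can be modeled compactly: a memslot flush
is a range flush over [base_gfn, base_gfn + npages), guarded by a
slots_lock assertion. In the sketch below, the slots_lock_held flag and
assert() are userspace stand-ins for the real lock and
lockdep_assert_held(); this is not kernel code:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long gfn_t;

struct kvm_memory_slot { gfn_t base_gfn; unsigned long npages; };
struct kvm { bool slots_lock_held; };

static void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn,
					unsigned long long pages)
{
	(void)kvm;
	printf("flush gfns [%llu, %llu)\n", gfn, gfn + pages);
}

/*
 * Common-code replacement for the per-arch
 * kvm_arch_flush_remote_tlbs_memslot() copies: a memslot flush is just
 * a range flush covering the whole slot.
 */
static void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
					  const struct kvm_memory_slot *memslot)
{
	assert(kvm->slots_lock_held);	/* models lockdep_assert_held() */
	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
}

int main(void)
{
	struct kvm vm = { .slots_lock_held = true };
	struct kvm_memory_slot slot = { .base_gfn = 0x100000, .npages = 512 };

	kvm_flush_remote_tlbs_memslot(&vm, &slot);
	return 0;
}
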