From patchwork Thu Jan 19 17:35:53 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108440
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
 Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Sean Christopherson,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, David Matlack, Raghavendra Rao Ananta
Date: Thu, 19 Jan 2023 09:35:53 -0800
Message-ID: <20230119173559.2517103-2-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 1/7] KVM: Rename kvm_arch_flush_remote_tlb() to kvm_arch_flush_remote_tlbs()

Rename kvm_arch_flush_remote_tlb() and the associated macro
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLB to kvm_arch_flush_remote_tlbs() and
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS respectively.

Making the name plural matches kvm_flush_remote_tlbs() and makes it
clearer that this function can flush more than one remote TLB.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/mips/include/asm/kvm_host.h | 4 ++--
 arch/mips/kvm/mips.c             | 2 +-
 arch/x86/include/asm/kvm_host.h  | 4 ++--
 include/linux/kvm_host.h         | 4 ++--
 virt/kvm/kvm_main.c              | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 2803c9c21ef9..849eb482ad15 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -896,7 +896,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-int kvm_arch_flush_remote_tlb(struct kvm *kvm);
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 36c8991b5d39..2e54e5fd8daa 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -981,7 +981,7 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         kvm_mips_callbacks->prepare_flush_shadow(kvm);
         return 1;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4d2bc08794e4..1bacc3de2432 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1789,8 +1789,8 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 #define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         if (kvm_x86_ops.tlb_remote_flush &&
             !static_call(kvm_x86_tlb_remote_flush)(kvm))
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 109b18e2789c..76711afe4d17 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1477,8 +1477,8 @@ static inline void kvm_arch_free_vm(struct kvm *kvm)
 }
 #endif
 
-#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
         return -ENOTSUPP;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..277507463678 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -363,7 +363,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
          * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
          * barrier here.
          */
-        if (!kvm_arch_flush_remote_tlb(kvm)
+        if (!kvm_arch_flush_remote_tlbs(kvm)
             || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
                 ++kvm->stat.generic.remote_tlb_flush;
 }
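A minimal sketch of the contract the renamed hook implements, for reference
(the my_arch_* names below are placeholders, not functions in the tree): an
architecture defines __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS and returns 0 from
kvm_arch_flush_remote_tlbs() when it has flushed all remote TLBs itself; any
nonzero return makes kvm_flush_remote_tlbs() fall back to kicking every vCPU
with KVM_REQ_TLB_FLUSH.

#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        if (my_arch_has_broadcast_tlb_flush()) {        /* placeholder */
                my_arch_broadcast_tlb_flush(kvm);       /* placeholder */
                return 0;       /* flushed; generic code only bumps the stat */
        }

        /* Nonzero: generic code sends KVM_REQ_TLB_FLUSH to all vCPUs. */
        return -ENOTSUPP;
}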
From patchwork Thu Jan 19 17:35:54 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108441
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:54 -0800
Message-ID: <20230119173559.2517103-3-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 2/7] KVM: arm64: Use kvm_arch_flush_remote_tlbs()

Use kvm_arch_flush_remote_tlbs() instead of
CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL. The two mechanisms solve the same
problem, allowing architecture-specific code to provide a non-IPI
implementation of remote TLB flushing.

Dropping CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL allows KVM to standardize
all architectures on kvm_arch_flush_remote_tlbs() instead of
maintaining two mechanisms.

Opt to standardize on kvm_arch_flush_remote_tlbs() since it avoids
duplicating the generic TLB stats across architectures that implement
their own remote TLB flush.

This adds an extra function call to the ARM64 kvm_flush_remote_tlbs()
path, but (I assume) that is a small cost in comparison to flushing
remote TLBs.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 arch/arm64/kvm/Kconfig            | 1 -
 arch/arm64/kvm/mmu.c              | 6 +++---
 virt/kvm/kvm_main.c               | 2 --
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 113e20fdbb56..062800f1dc54 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -998,6 +998,9 @@ int __init kvm_set_ipa_limit(void);
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
         return false;
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ca6eadeb7d1a..e9ac57098a0b 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -25,7 +25,6 @@ menuconfig KVM
         select MMU_NOTIFIER
         select PREEMPT_NOTIFIERS
         select HAVE_KVM_CPU_RELAX_INTERCEPT
-        select HAVE_KVM_ARCH_TLB_FLUSH_ALL
         select KVM_MMIO
         select KVM_GENERIC_DIRTYLOG_READ_PROTECT
         select KVM_XFER_TO_GUEST_WORK
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 01352f5838a0..8840f65e0e40 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -80,15 +80,15 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 }
 
 /**
- * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
+ * kvm_arch_flush_remote_tlbs() - flush all VM TLB entries for v7/8
  * @kvm: pointer to kvm structure.
  *
  * Interface to HYP function to flush all VM TLB entries
  */
-void kvm_flush_remote_tlbs(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-        ++kvm->stat.generic.remote_tlb_flush_requests;
         kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+        return 0;
 }
 
 static bool kvm_is_device_pfn(unsigned long pfn)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 277507463678..fefd3e3c8fe1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -347,7 +347,6 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 }
 EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 
-#ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
         ++kvm->stat.generic.remote_tlb_flush_requests;
@@ -368,7 +367,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
         ++kvm->stat.generic.remote_tlb_flush;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
-#endif
 
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
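To make the before/after concrete: under CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL,
an architecture replaced the entire generic function, stats and all; with the
hook, the generic wrapper keeps the bookkeeping and the architecture supplies
only the flush itself. A rough sketch of the two shapes, condensed from the
diff above (not a verbatim copy of either tree):

/* Old mechanism: arm64 overrode the whole generic function. */
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
        ++kvm->stat.generic.remote_tlb_flush_requests;
        kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
}

/* New mechanism: arm64 implements only the arch hook; generic code
 * owns the stats and the IPI fallback. */
int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
        kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
        return 0;       /* handled; no IPI fallback needed */
}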
From patchwork Thu Jan 19 17:35:55 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108442
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:55 -0800
Message-ID: <20230119173559.2517103-4-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 3/7] KVM: x86/mmu: Collapse kvm_flush_remote_tlbs_with_{range,address}() together

Collapse kvm_flush_remote_tlbs_with_range() and
kvm_flush_remote_tlbs_with_address() into a single function. This
eliminates some lines of code and a useless NULL check on the range
struct.

Opportunistically switch from ENOTSUPP to EOPNOTSUPP to make checkpatch
happy.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aeb240b339f5..7740ca52dab4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,27 +246,20 @@ static inline bool kvm_available_flush_tlb_with_range(void)
         return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-                struct kvm_tlb_range *range)
-{
-        int ret = -ENOTSUPP;
-
-        if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-                ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
-
-        if (ret)
-                kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
                 u64 start_gfn, u64 pages)
 {
         struct kvm_tlb_range range;
+        int ret = -EOPNOTSUPP;
 
         range.start_gfn = start_gfn;
         range.pages = pages;
 
-        kvm_flush_remote_tlbs_with_range(kvm, &range);
+        if (kvm_x86_ops.tlb_remote_flush_with_range)
+                ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
+
+        if (ret)
+                kvm_flush_remote_tlbs(kvm);
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
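After the collapse, the fallback decision lives in one place. A sketch of
what a typical call now means for the caller (gfn here stands for whatever
guest frame was just touched; not a specific callsite from the patch):

/*
 * Flush one GFN. If kvm_x86_ops.tlb_remote_flush_with_range is absent
 * or fails, ret stays nonzero and the whole remote TLB is flushed
 * instead, so callers need no fallback of their own.
 */
kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);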
From patchwork Thu Jan 19 17:35:56 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108443
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:56 -0800
Message-ID: <20230119173559.2517103-5-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 4/7] KVM: x86/mmu: Rename kvm_flush_remote_tlbs_with_address()

Rename kvm_flush_remote_tlbs_with_address() to
kvm_flush_remote_tlbs_range(). This name is shorter, which reduces the
number of callsites that need to be broken up across multiple lines,
and more readable since it conveys that a range of memory is being
flushed rather than a single address.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 36 +++++++++++++++-------------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +--
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  7 +++----
 4 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7740ca52dab4..36ce3110b7da 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,8 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
         return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-                u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
 {
         struct kvm_tlb_range range;
         int ret = -EOPNOTSUPP;
@@ -806,7 +805,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
         kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
         if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
-                kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+                kvm_flush_remote_tlbs_range(kvm, gfn, 1);
 }
 
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -1180,8 +1179,8 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
         drop_spte(kvm, sptep);
 
         if (flush)
-                kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
-                        KVM_PAGES_PER_HPAGE(sp->role.level));
+                kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+                                            KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
 /*
@@ -1462,7 +1461,7 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
         }
 
         if (need_flush && kvm_available_flush_tlb_with_range()) {
-                kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+                kvm_flush_remote_tlbs_range(kvm, gfn, 1);
                 return false;
         }
 
@@ -1632,8 +1631,8 @@ static void __rmap_add(struct kvm *kvm,
                 kvm->stat.max_mmu_rmap_size = rmap_count;
         if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
                 kvm_zap_all_rmap_sptes(kvm, rmap_head);
-                kvm_flush_remote_tlbs_with_address(
-                                kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+                kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+                                            KVM_PAGES_PER_HPAGE(sp->role.level));
         }
 }
 
@@ -2398,7 +2397,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
                         return;
 
                 drop_parent_pte(child, sptep);
-                kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
+                kvm_flush_remote_tlbs_range(vcpu->kvm, child->gfn, 1);
         }
 }
 
@@ -2882,8 +2881,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
         }
 
         if (flush)
-                kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
-                                KVM_PAGES_PER_HPAGE(level));
+                kvm_flush_remote_tlbs_range(vcpu->kvm, gfn,
+                                            KVM_PAGES_PER_HPAGE(level));
 
         pgprintk("%s: setting spte %llx\n", __func__, *sptep);
 
@@ -5814,9 +5813,8 @@ slot_handle_level_range(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 
                 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
                         if (flush && flush_on_yield) {
-                                kvm_flush_remote_tlbs_with_address(kvm,
-                                                start_gfn,
-                                                iterator.gfn - start_gfn + 1);
+                                kvm_flush_remote_tlbs_range(kvm, start_gfn,
+                                                            iterator.gfn - start_gfn + 1);
                                 flush = false;
                         }
                         cond_resched_rwlock_write(&kvm->mmu_lock);
@@ -6171,8 +6169,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
         }
 
         if (flush)
-                kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-                                                   gfn_end - gfn_start);
+                kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
 
         kvm_mmu_invalidate_end(kvm, 0, -1ul);
 
@@ -6511,8 +6508,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
                         kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
                         if (kvm_available_flush_tlb_with_range())
-                                kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
-                                        KVM_PAGES_PER_HPAGE(sp->role.level));
+                                kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+                                                            KVM_PAGES_PER_HPAGE(sp->role.level));
                         else
                                 need_tlb_flush = 1;
 
@@ -6562,8 +6559,7 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
          * is observed by any other operation on the same memslot.
          */
         lockdep_assert_held(&kvm->slots_lock);
-        kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
-                                           memslot->npages);
+        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
 }
 
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..e606a6d5e040 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,8 +164,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
                                     struct kvm_memory_slot *slot, u64 gfn,
                                     int min_level);
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-                                        u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e5662dbd519c..fdad03f131c8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -929,8 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
                         mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
                         if (is_shadow_present_pte(old_spte))
-                                kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-                                        sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+                                kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+                                                            KVM_PAGES_PER_HPAGE(sp->role.level));
 
                         if (!rmap_can_add(vcpu))
                                 break;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..7c21d15c58d8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -680,8 +680,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
         if (ret)
                 return ret;
 
-        kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
-                                           KVM_PAGES_PER_HPAGE(iter->level));
+        kvm_flush_remote_tlbs_range(kvm, iter->gfn, KVM_PAGES_PER_HPAGE(iter->level));
 
         /*
          * No other thread can overwrite the removed SPTE as they must either
@@ -1080,8 +1079,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
                 return RET_PF_RETRY;
         else if (is_shadow_present_pte(iter->old_spte) &&
                  !is_last_spte(iter->old_spte, iter->level))
-                kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-                                KVM_PAGES_PER_HPAGE(iter->level + 1));
+                kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+                                            KVM_PAGES_PER_HPAGE(iter->level + 1));
 
         /*
          * If the page fault was caused by a write but the page is write
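Most converted callsites pass a (gfn, page count) pair derived from a shadow
page's level. A sketch of the recurring idiom from the diff above, for
reference: KVM_PAGES_PER_HPAGE(level) is the number of base pages mapped by
one entry at that level (1 for 4KiB, 512 for 2MiB, 512*512 for 1GiB on x86).

/* Flush every guest page mapped by the shadow page 'sp'. */
kvm_flush_remote_tlbs_range(kvm, sp->gfn,
                            KVM_PAGES_PER_HPAGE(sp->role.level));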
From patchwork Thu Jan 19 17:35:57 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108444
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:57 -0800
Message-ID: <20230119173559.2517103-6-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 5/7] KVM: x86/MMU: Use gfn_t in kvm_flush_remote_tlbs_range()

Use gfn_t instead of u64 for the start_gfn parameter to
kvm_flush_remote_tlbs_range(), since that is the standard type for GFNs
throughout KVM.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 36ce3110b7da..1e2c2d711dbb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,7 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
         return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
         struct kvm_tlb_range range;
         int ret = -EOPNOTSUPP;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e606a6d5e040..851982a25502 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
                                     struct kvm_memory_slot *slot, u64 gfn,
                                     int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
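For context, gfn_t is KVM's dedicated type for guest frame numbers (a u64
typedef in include/linux/kvm_types.h), and conversions to and from guest
physical addresses go through the standard helpers, along the lines of:

static inline gfn_t gpa_to_gfn(gpa_t gpa)
{
        return (gfn_t)(gpa >> PAGE_SHIFT);
}

static inline gpa_t gfn_to_gpa(gfn_t gfn)
{
        return (gpa_t)gfn << PAGE_SHIFT;
}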
From patchwork Thu Jan 19 17:35:58 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108445
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:58 -0800
Message-ID: <20230119173559.2517103-7-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 6/7] KVM: Allow range-based TLB invalidation from common code

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future cleanups:

- Introduction of range-based TLBI on ARM.
- Eliminating kvm_arch_flush_remote_tlbs_memslot().
- Moving the KVM/x86 TDP MMU to common code.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 ++---
 arch/x86/kvm/mmu/mmu_internal.h |  1 -
 include/linux/kvm_host.h        |  9 +++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1bacc3de2432..420713ac8916 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1799,6 +1799,9 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 #define kvm_arch_pmi_in_guest(vcpu) \
         ((vcpu) && (vcpu)->arch.handling_intr_from_guest)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e2c2d711dbb..491c28d22cbe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -246,7 +246,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
         return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
         struct kvm_tlb_range range;
         int ret = -EOPNOTSUPP;
@@ -257,8 +257,7 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
         if (kvm_x86_ops.tlb_remote_flush_with_range)
                 ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
 
-        if (ret)
-                kvm_flush_remote_tlbs(kvm);
+        return ret;
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 851982a25502..d5599f2d3f96 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -164,7 +164,6 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
                                     struct kvm_memory_slot *slot, u64 gfn,
                                     int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 76711afe4d17..acfb17d9b44d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1356,6 +1356,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1484,6 +1485,14 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 }
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+                                                   gfn_t gfn, u64 pages)
+{
+        return -EOPNOTSUPP;
+}
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fefd3e3c8fe1..c9fc693a39d9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -368,6 +368,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
+{
+        if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, pages))
+                return;
+
+        /*
+         * Fall back to flushing the entire TLB if the architecture's
+         * range-based TLB invalidation is unsupported or can't be
+         * performed for whatever reason.
+         */
+        kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
         kvm_arch_flush_shadow_all(kvm);
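With the hook in common code, a future architecture conversion reduces to
defining the macro and the function. A hypothetical sketch of an opt-in
(none of the my_arch_* helpers exist; ARM's range-based TLBI conversion is
only listed as future work above):

#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
        if (!my_arch_has_range_tlbi())                  /* placeholder */
                return -EOPNOTSUPP;     /* common code does a full flush */

        my_arch_range_tlbi(kvm, start_gfn, pages);      /* placeholder */
        return 0;       /* range flushed; no fallback needed */
}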
From patchwork Thu Jan 19 17:35:59 2023
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13108446
From: David Matlack
Date: Thu, 19 Jan 2023 09:35:59 -0800
Message-ID: <20230119173559.2517103-8-dmatlack@google.com>
In-Reply-To: <20230119173559.2517103-1-dmatlack@google.com>
Subject: [PATCH 7/7] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop
"arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a
range-based TLB invalidation where the range is defined by the memslot.
Now that kvm_flush_remote_tlbs_range() can be called from common code
we can just use that and drop a bunch of duplicate code from the arch
directories.

Note this adds a lockdep assertion for slots_lock being held when
calling kvm_flush_remote_tlbs_memslot(), which was previously only
asserted on x86. MIPS has calls to kvm_flush_remote_tlbs_memslot(), but
they all hold the slots_lock, so the lockdep assertion continues to
hold true.

Also drop the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT ifdef gating
kvm_flush_remote_tlbs_memslot(), since it is no longer necessary.

Signed-off-by: David Matlack
---
 arch/arm64/kvm/arm.c     |  6 ------
 arch/mips/kvm/mips.c     | 10 ++--------
 arch/riscv/kvm/mmu.c     |  6 ------
 arch/x86/kvm/mmu/mmu.c   | 16 +---------------
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  7 +++----
 virt/kvm/kvm_main.c      | 18 ++++++++++++++++--
 7 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 698787ed87e9..54d5d0733b98 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1420,12 +1420,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
                                         struct kvm_arm_device_addr *dev_addr)
 {
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 2e54e5fd8daa..9f9a7ba7eb2b 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -199,7 +199,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
         /* Flush slot from GPA */
         kvm_mips_flush_gpa_pt(kvm, slot->base_gfn,
                               slot->base_gfn + slot->npages - 1);
-        kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+        kvm_flush_remote_tlbs_memslot(kvm, slot);
         spin_unlock(&kvm->mmu_lock);
 }
 
@@ -235,7 +235,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                 needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn,
                                         new->base_gfn + new->npages - 1);
                 if (needs_flush)
-                        kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+                        kvm_flush_remote_tlbs_memslot(kvm, new);
                 spin_unlock(&kvm->mmu_lock);
         }
 }
@@ -987,12 +987,6 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
         return 1;
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
         long r;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 66ef19676fe4..87f30487f59f 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 491c28d22cbe..4af85888c98b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6528,7 +6528,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
          */
         if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte,
                               PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true))
-                kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+                kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
@@ -6547,20 +6547,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
         }
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot)
-{
-        /*
-         * All current use cases for flushing the TLBs for a specific memslot
-         * related to dirty logging, and many do the TLB flush out of mmu_lock.
-         * The interaction between the various operations on memslot must be
-         * serialized by slots_locks to ensure the TLB flush from one operation
-         * is observed by any other operation on the same memslot.
-         */
-        lockdep_assert_held(&kvm->slots_lock);
-        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
-}
-
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
                                    const struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 508074e47bc0..ea7bb4035a60 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12617,7 +12617,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
                  * See is_writable_pte() for more details (the case involving
                  * access-tracked SPTEs is particularly relevant).
                  */
-                kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+                kvm_flush_remote_tlbs_memslot(kvm, new);
         }
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index acfb17d9b44d..12dfecd27c9d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1357,6 +1357,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages);
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+                                   const struct kvm_memory_slot *memslot);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1385,10 +1387,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
                                              unsigned long mask);
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-                                        const struct kvm_memory_slot *memslot);
-#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
                       int *is_dirty, struct kvm_memory_slot **memslot);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c9fc693a39d9..9c10cd191a71 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -381,6 +381,20 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 pages)
         kvm_flush_remote_tlbs(kvm);
 }
 
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+                                   const struct kvm_memory_slot *memslot)
+{
+        /*
+         * All current use cases for flushing the TLBs for a specific memslot
+         * related to dirty logging, and many do the TLB flush out of mmu_lock.
+         * The interaction between the various operations on memslot must be
+         * serialized by slots_locks to ensure the TLB flush from one operation
+         * is observed by any other operation on the same memslot.
+         */
+        lockdep_assert_held(&kvm->slots_lock);
+        kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
         kvm_arch_flush_shadow_all(kvm);
@@ -2188,7 +2202,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
         }
 
         if (flush)
-                kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+                kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
         if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
                 return -EFAULT;
@@ -2305,7 +2319,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
         KVM_MMU_UNLOCK(kvm);
 
         if (flush)
-                kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+                kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
         return 0;
 }
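A sketch of what the new lockdep assertion expects of callers: any path that
flushes a memslot must already hold slots_lock. The example function and the
clear_dirty_bits_for() helper below are placeholders standing in for whatever
dirty-logging work precedes the flush, not code from the series:

static void example_flush_after_dirty_log(struct kvm *kvm,
                                          const struct kvm_memory_slot *memslot)
{
        bool flush;

        mutex_lock(&kvm->slots_lock);
        flush = clear_dirty_bits_for(kvm, memslot);     /* placeholder */
        if (flush)
                kvm_flush_remote_tlbs_memslot(kvm, memslot);
        mutex_unlock(&kvm->slots_lock);
}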