From patchwork Tue Aug 8 23:13:21 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347224
Date: Tue, 8 Aug 2023 23:13:21 +0000
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
References: <20230808231330.3855936-1-rananta@google.com>
X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog
Message-ID: <20230808231330.3855936-6-rananta@google.com>
Subject: [PATCH v8 05/14] KVM: Allow range-based TLB invalidation from
 common code
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
 Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Ananta, David Matlack, Fuad Tabba,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Gavin Shan, Shaoqin Huang

From: David Matlack

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future features/cleanups:

- Introduction of range-based TLBI on ARM.
- Eliminating kvm_arch_flush_remote_tlbs_memslot()
- Moving the KVM/x86 TDP MMU to common code.

No functional change intended.
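The contract this establishes: kvm_arch_flush_remote_tlbs_range() returns
0 when the architecture has flushed the requested range itself, and any
non-zero value (the default stub returns -EOPNOTSUPP) makes common code
fall back to a full kvm_flush_remote_tlbs(). As a minimal sketch of what
that means for a caller (the function and the memslot-based range below
are illustrative, not taken from this patch):

	/*
	 * Hypothetical caller: flush only the TLB entries covering one
	 * memslot. Common code first tries the architecture's
	 * range-based flush and silently degrades to a full flush if
	 * kvm_arch_flush_remote_tlbs_range() returns non-zero.
	 */
	static void example_flush_memslot(struct kvm *kvm,
					  const struct kvm_memory_slot *slot)
	{
		kvm_flush_remote_tlbs_range(kvm, slot->base_gfn, slot->npages);
	}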
Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
Reviewed-by: Anup Patel
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu/mmu.c          |  8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h |  3 ---
 include/linux/kvm_host.h        | 12 ++++++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a2d3cfc2eb75c..b547d17c58f63 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1804,6 +1804,8 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 #define kvm_arch_pmi_in_guest(vcpu) \
 	((vcpu) && (vcpu)->arch.handling_intr_from_guest)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce2..6adbe6c870982 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -278,16 +278,16 @@ static inline bool kvm_available_flush_remote_tlbs_range(void)
 	return kvm_x86_ops.flush_remote_tlbs_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-				 gfn_t nr_pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
+				     u64 nr_pages)
 {
 	int ret = -EOPNOTSUPP;
 
 	if (kvm_x86_ops.flush_remote_tlbs_range)
 		ret = static_call(kvm_x86_flush_remote_tlbs_range)(kvm, start_gfn,
 								   nr_pages);
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
+
+	return ret;
 }
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce97..86cb83bb34804 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -170,9 +170,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-				 gfn_t nr_pages);
-
 /* Flush the given page (huge or not) of guest memory. */
 static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ade5d4500c2ce..f0be5d9c37dd1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1488,6 +1489,17 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+						   gfn_t gfn, u64 nr_pages)
+{
+	return -EOPNOTSUPP;
+}
+#else
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+				     gfn_t gfn, u64 nr_pages);
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d6b0507861550..26e91000f579d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -366,6 +366,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
+{
+	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
+		return;
+
+	/*
+	 * Fall back to flushing the entire TLB if range-based TLB
+	 * invalidation is unsupported by the architecture or can't be
+	 * performed for whatever reason.
+	 */
+	kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
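
For reference, an architecture opts in to range-based invalidation by
following the same pattern as the x86 hunks above: define
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE in its asm/kvm_host.h and
implement kvm_arch_flush_remote_tlbs_range(). A hypothetical sketch
(the capability check and flush helper below are invented for
illustration; this is not the ARM implementation added later in this
series):

/* In arch/<arch>/include/asm/kvm_host.h: */
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE

/* In the architecture's MMU code: */
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
	/*
	 * Returning non-zero tells common code to fall back to a full
	 * kvm_flush_remote_tlbs(); returning 0 means the range was
	 * invalidated here and no further flushing is needed.
	 */
	if (!cpu_has_range_tlbi())		/* hypothetical capability check */
		return -EOPNOTSUPP;

	arch_tlbi_gfn_range(kvm, gfn, nr_pages);	/* hypothetical helper */
	return 0;
}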