From patchwork Tue Aug 8 23:13:17 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:17 +0000
Message-ID: <20230808231330.3855936-2-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 01/14] KVM: Rename kvm_arch_flush_remote_tlb() to
 kvm_arch_flush_remote_tlbs()

From: David Matlack

Rename kvm_arch_flush_remote_tlb() and the associated macro
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLB to kvm_arch_flush_remote_tlbs() and
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS respectively.

Making the name plural matches kvm_flush_remote_tlbs() and makes it
clearer that this function can affect more than one remote TLB.

No functional change intended.

Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Shaoqin Huang
---
 arch/mips/include/asm/kvm_host.h | 4 ++--
 arch/mips/kvm/mips.c             | 2 +-
 arch/x86/include/asm/kvm_host.h  | 4 ++--
 include/linux/kvm_host.h         | 4 ++--
 virt/kvm/kvm_main.c              | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 04cedf9f88115..9b0ad8f3bf327 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -896,7 +896,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-int kvm_arch_flush_remote_tlb(struct kvm *kvm);
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index aa5583a7b05be..4b7bc39a41736 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -981,7 +981,7 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	kvm_mips_callbacks->prepare_flush_shadow(kvm);
 	return 1;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 28bd38303d704..a2d3cfc2eb75c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1794,8 +1794,8 @@ static inline struct kvm *kvm_arch_alloc_vm(void)
 #define __KVM_HAVE_ARCH_VM_FREE
 void kvm_arch_free_vm(struct kvm *kvm);
 
-#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	if (kvm_x86_ops.flush_remote_tlbs &&
 	    !static_call(kvm_x86_flush_remote_tlbs)(kvm))
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9d3ac7720da9f..e3f968b38ae97 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1479,8 +1479,8 @@ static inline void kvm_arch_free_vm(struct kvm *kvm)
 }
 #endif
 
-#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLB
-static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	return -ENOTSUPP;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index dfbaafbe3a009..70e5479797ac3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -361,7 +361,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	 * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
 	 * barrier here.
 	 */
-	if (!kvm_arch_flush_remote_tlb(kvm)
+	if (!kvm_arch_flush_remote_tlbs(kvm)
 	    || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.generic.remote_tlb_flush;
 }

From patchwork Tue Aug 8 23:13:18 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:18 +0000
Message-ID: <20230808231330.3855936-3-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 02/14] KVM: Declare kvm_arch_flush_remote_tlbs() globally

There's no reason for the architectures to declare
kvm_arch_flush_remote_tlbs() in their own headers. Hence, to avoid this
duplication, make the declaration global, leaving the architectures to
define only __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS as needed.

Signed-off-by: Raghavendra Rao Ananta
Signed-off-by: Sean Christopherson
Reviewed-by: Shaoqin Huang
---
 arch/mips/include/asm/kvm_host.h | 1 -
 include/linux/kvm_host.h         | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 9b0ad8f3bf327..54a85f1d4f2c8 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -897,6 +897,5 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
-int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e3f968b38ae97..ade5d4500c2ce 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1484,6 +1484,8 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
 	return -ENOTSUPP;
 }
+#else
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 #endif
 
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
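
[To illustrate the resulting convention, here is a sketch only; the "foo"
architecture and its helper are hypothetical. An architecture that
provides its own remote-TLB flush now defines just the macro in its
asm/kvm_host.h and relies on the common declaration for the prototype:

	/* arch/foo/include/asm/kvm_host.h (hypothetical) */
	#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS

	/* arch/foo/kvm/foo.c (hypothetical) */
	int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
	{
		/* Arch-specific broadcast invalidation goes here. */
		foo_invalidate_guest_tlbs(kvm);
		return 0;	/* 0: flush handled, no IPI-based fallback */
	}

The prototype now comes from <linux/kvm_host.h>, so no per-arch
declaration is needed.]
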
From patchwork Tue Aug 8 23:13:19 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:19 +0000
Message-ID: <20230808231330.3855936-4-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 03/14] KVM: arm64: Use kvm_arch_flush_remote_tlbs()

Stop depending on CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL and opt to
standardize on kvm_arch_flush_remote_tlbs(), since it avoids
duplicating the generic TLB stats across architectures that implement
their own remote TLB flush.

This adds an extra function call to the ARM64 kvm_flush_remote_tlbs()
path, but that is a small cost in comparison to flushing remote TLBs.

In addition, instead of just incrementing the remote_tlb_flush_requests
stat, the generic interface also increments the remote_tlb_flush stat.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
Reviewed-by: Gavin Shan
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/Kconfig            | 1 -
 arch/arm64/kvm/mmu.c              | 6 +++---
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8b6096753740c..20f2ba149c70c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1111,6 +1111,8 @@ int __init kvm_set_ipa_limit(void);
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index f531da6b362e9..6b730fcfee379 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -25,7 +25,6 @@ menuconfig KVM
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
-	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_XFER_TO_GUEST_WORK
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db9ef288ec38..0ac721fa27f18 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -161,15 +161,15 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 }
 
 /**
- * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
+ * kvm_arch_flush_remote_tlbs() - flush all VM TLB entries for v7/8
  * @kvm:	pointer to kvm structure.
  *
  * Interface to HYP function to flush all VM TLB entries
 */
-void kvm_flush_remote_tlbs(struct kvm *kvm)
+int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	++kvm->stat.generic.remote_tlb_flush_requests;
 	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	return 0;
 }
 
 static bool kvm_is_device_pfn(unsigned long pfn)
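
[For context, with this patch applied the generic path in
virt/kvm/kvm_main.c, as it stands at this point in the series, handles
both stats and the IPI fallback in one place, roughly:

	void kvm_flush_remote_tlbs(struct kvm *kvm)
	{
		++kvm->stat.generic.remote_tlb_flush_requests;
		/*
		 * The arch hook returns 0 when it flushed everything
		 * itself (as arm64 now does); otherwise fall back to
		 * IPI-based flushing via KVM_REQ_TLB_FLUSH.
		 */
		if (!kvm_arch_flush_remote_tlbs(kvm) ||
		    kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
			++kvm->stat.generic.remote_tlb_flush;
	}

so arm64 no longer needs to touch the generic stats itself.]
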
From patchwork Tue Aug 8 23:13:20 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:20 +0000
Message-ID: <20230808231330.3855936-5-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 04/14] KVM: Remove CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL

kvm_arch_flush_remote_tlbs() and CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL are
two mechanisms that solve the same problem: allowing
architecture-specific code to provide a non-IPI implementation of
remote TLB flushing. Dropping CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL allows
KVM to standardize all architectures on kvm_arch_flush_remote_tlbs()
instead of maintaining two mechanisms.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
Reviewed-by: Gavin Shan
Reviewed-by: Philippe Mathieu-Daudé
---
 virt/kvm/Kconfig    | 3 ---
 virt/kvm/kvm_main.c | 2 --
 2 files changed, 5 deletions(-)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index b74916de5183a..484d0873061ca 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -62,9 +62,6 @@ config HAVE_KVM_CPU_RELAX_INTERCEPT
 config KVM_VFIO
 	bool
 
-config HAVE_KVM_ARCH_TLB_FLUSH_ALL
-	bool
-
 config HAVE_KVM_INVALID_WAKEUPS
 	bool
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 70e5479797ac3..d6b0507861550 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -345,7 +345,6 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 }
 EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 
-#ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	++kvm->stat.generic.remote_tlb_flush_requests;
@@ -366,7 +365,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	++kvm->stat.generic.remote_tlb_flush;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
-#endif
 
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
From patchwork Tue Aug 8 23:13:21 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:21 +0000
Message-ID: <20230808231330.3855936-6-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 05/14] KVM: Allow range-based TLB invalidation from
 common code

From: David Matlack

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This paves the way for several future features/cleanups (see the usage
sketch after this patch):

 - Introduction of range-based TLBI on ARM.
 - Eliminating kvm_arch_flush_remote_tlbs_memslot()
 - Moving the KVM/x86 TDP MMU to common code.

No functional change intended.

Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
Reviewed-by: Anup Patel
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu/mmu.c          |  8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h |  3 ---
 include/linux/kvm_host.h        | 12 ++++++++++++
 virt/kvm/kvm_main.c             | 13 +++++++++++++
 5 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a2d3cfc2eb75c..b547d17c58f63 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1804,6 +1804,8 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return -ENOTSUPP;
 }
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 #define kvm_arch_pmi_in_guest(vcpu) \
 	((vcpu) && (vcpu)->arch.handling_intr_from_guest)
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce2..6adbe6c870982 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -278,16 +278,16 @@ static inline bool kvm_available_flush_remote_tlbs_range(void)
 	return kvm_x86_ops.flush_remote_tlbs_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-				 gfn_t nr_pages)
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
+				     u64 nr_pages)
 {
 	int ret = -EOPNOTSUPP;
 
 	if (kvm_x86_ops.flush_remote_tlbs_range)
 		ret = static_call(kvm_x86_flush_remote_tlbs_range)(kvm, start_gfn,
 								   nr_pages);
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
+
+	return ret;
 }
 
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce97..86cb83bb34804 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -170,9 +170,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn,
-				 gfn_t nr_pages);
-
 /* Flush the given page (huge or not) of guest memory. */
 static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ade5d4500c2ce..f0be5d9c37dd1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1359,6 +1359,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1488,6 +1489,17 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 #endif
 
+#ifndef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+						   gfn_t gfn, u64 nr_pages)
+{
+	return -EOPNOTSUPP;
+}
+#else
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+				     gfn_t gfn, u64 nr_pages);
+#endif
+
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d6b0507861550..26e91000f579d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -366,6 +366,19 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
+{
+	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
+		return;
+
+	/*
+	 * Fall back to flushing entire TLBs if the architecture's
+	 * range-based TLB invalidation is unsupported or can't be
+	 * performed for whatever reason.
+	 */
+	kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
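
[As a sketch of the new contract, illustrative only; the "foo" helpers
are hypothetical. An architecture opts in by defining
__KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE and returning 0 only when the
range flush actually happened:

	int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
					     gfn_t gfn, u64 nr_pages)
	{
		if (!foo_has_range_tlbi())
			return -EOPNOTSUPP; /* common code does a full flush */

		foo_range_tlbi(kvm, gfn, nr_pages);
		return 0;		    /* range flush done, no fallback */
	}

Callers never see the difference; they simply invoke, e.g.,

	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);

and common code picks the range path or the full-flush fallback.]
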
From patchwork Tue Aug 8 23:13:22 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:22 +0000
Message-ID: <20230808231330.3855936-7-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 06/14] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to
 common code

From: David Matlack

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop
"arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a
range-based TLB invalidation where the range is defined by the memslot.
Now that kvm_flush_remote_tlbs_range() can be called from common code,
we can just use that and drop a bunch of duplicate code from the arch
directories.

Note that this adds a lockdep assertion for slots_lock being held when
calling kvm_flush_remote_tlbs_memslot(), which was previously only
asserted on x86. MIPS has calls to kvm_flush_remote_tlbs_memslot(), but
they all hold the slots_lock, so the lockdep assertion continues to
hold true.

Also drop the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT ifdef gating
kvm_flush_remote_tlbs_memslot(), since it is no longer necessary.

Signed-off-by: David Matlack
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
Acked-by: Anup Patel
---
 arch/arm64/kvm/arm.c     |  6 ------
 arch/mips/kvm/mips.c     | 10 ++--------
 arch/riscv/kvm/mmu.c     |  6 ------
 arch/x86/kvm/mmu/mmu.c   | 16 +---------------
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  7 +++----
 virt/kvm/kvm_main.c      | 18 ++++++++++++++++--
 7 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c2c14059f6a8c..ed7bef4d970b9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1525,12 +1525,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
 {
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 4b7bc39a41736..231ac052b506b 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -199,7 +199,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 	/* Flush slot from GPA */
 	kvm_mips_flush_gpa_pt(kvm, slot->base_gfn,
 			      slot->base_gfn + slot->npages - 1);
-	kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+	kvm_flush_remote_tlbs_memslot(kvm, slot);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -235,7 +235,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn,
 					new->base_gfn + new->npages - 1);
 		if (needs_flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+			kvm_flush_remote_tlbs_memslot(kvm, new);
 		spin_unlock(&kvm->mmu_lock);
 	}
 }
@@ -987,12 +987,6 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 1;
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
 	int r;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f2eb47925806b..97e129620686c 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6adbe6c870982..9e074b5f322de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6670,7 +6670,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 	 */
 	if (walk_slot_rmaps(kvm, slot, kvm_mmu_zap_collapsible_spte,
 			    PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true))
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
@@ -6689,20 +6689,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	}
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	/*
-	 * All current use cases for flushing the TLBs for a specific memslot
-	 * related to dirty logging, and many do the TLB flush out of mmu_lock.
-	 * The interaction between the various operations on memslot must be
-	 * serialized by slots_locks to ensure the TLB flush from one operation
-	 * is observed by any other operation on the same memslot.
-	 */
-	lockdep_assert_held(&kvm->slots_lock);
-	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
-}
-
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6b9bea62fb8a..faeb2e307b36a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12751,7 +12751,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 		 * See is_writable_pte() for more details (the case involving
 		 * access-tracked SPTEs is particularly relevant).
 		 */
-		kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+		kvm_flush_remote_tlbs_memslot(kvm, new);
 	}
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f0be5d9c37dd1..49292befa97fb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1360,6 +1360,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool yield_to_kernel_mode);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+				   const struct kvm_memory_slot *memslot);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1388,10 +1390,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					     unsigned long mask);
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot);
-#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 		      int *is_dirty, struct kvm_memory_slot **memslot);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 26e91000f579d..5d4d2e051aa09 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -379,6 +379,20 @@ void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 	kvm_flush_remote_tlbs(kvm);
 }
 
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+				   const struct kvm_memory_slot *memslot)
+{
+	/*
+	 * All current use cases for flushing the TLBs for a specific memslot
+	 * are related to dirty logging, and many do the TLB flush out of
+	 * mmu_lock. The interaction between the various operations on memslot
+	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * operation is observed by any other operation on the same memslot.
+	 */
+	lockdep_assert_held(&kvm->slots_lock);
+	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
@@ -2191,7 +2205,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 	}
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 		return -EFAULT;
@@ -2308,7 +2322,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	KVM_MMU_UNLOCK(kvm);
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	return 0;
 }
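
[A brief usage note, illustrative only: because of the new lockdep
assertion, any caller of the common helper must hold slots_lock across
the flush, e.g.:

	mutex_lock(&kvm->slots_lock);
	/* ... dirty-log or memslot operation ... */
	kvm_flush_remote_tlbs_memslot(kvm, memslot);
	mutex_unlock(&kvm->slots_lock);

On lockdep-enabled builds, calling it without slots_lock held trips the
assertion.]
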
From patchwork Tue Aug 8 23:13:23 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:23 +0000
Message-ID: <20230808231330.3855936-8-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 07/14] arm64: tlb: Refactor the core flush algorithm of
 __flush_tlb_range

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/tlbflush.h | 121 +++++++++++++++++-------------
 1 file changed, 68 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..b9475a852d5be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,74 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/*
+ * __flush_tlb_range_op - Perform TLBI operation upon a range
+ *
+ * @op:	TLBI instruction that operates on a range (has 'r' prefix)
+ * @start:	The start address of the range
+ * @pages:	Range as the number of pages from 'start'
+ * @stride:	Flush granularity
+ * @asid:	The ASID of the task (0 for IPA instructions)
+ * @tlb_level:	Translation Table level hint, if known
+ * @tlbi_user:	If 'true', call an additional __tlbi_user()
+ *		(typically for user ASIDs). 'false' for IPA instructions
+ *
+ * When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user)		\
+do {									\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						  num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +367,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
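
[A worked example of the range encoding described in the macro's
comment, derived only from the documented (num+1)*2^(5*scale+1)
formula: to flush 513 pages, the loop first issues one non-range TLBI
for the odd page, leaving 512 pages; since 512 = (7+1)*2^(5*1+1), those
are covered by a single range TLBI with scale = 1 and num = 7, instead
of 256 stride-sized operations. A scale = 0 encoding of the same range
would need num = 255, far above the limit of 30 noted in the comment.]
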
From patchwork Tue Aug 8 23:13:24 2023
From: Raghavendra Rao Ananta
Date: Tue, 8 Aug 2023 23:13:24 +0000
Message-ID: <20230808231330.3855936-9-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 08/14] arm64: tlb: Implement __flush_s2_tlb_range_op()

Define __flush_s2_tlb_range_op(), as a wrapper over
__flush_tlb_range_op(), for stage-2 specific range-based TLBI
operations that don't need to deal with the 'asid' and 'tlbi_user'
arguments.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/tlbflush.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b9475a852d5be..93f4b397f9a12 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -340,6 +340,9 @@ do {								\
 	}							\
 } while (0)
 
+#define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
+	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
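
[A note on how the wrapper composes, for illustration: because the
inner macro token-pastes r##op for the range-capable path, passing
op = ipas2e1is yields the ripas2e1is range instruction, and the
hardcoded asid = 0 / tlbi_user = false match the IPA-instruction
convention documented for __flush_tlb_range_op(). The next patch uses
it exactly this way:

	__flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0);
]
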
From patchwork Tue Aug 8 23:13:25 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347191
Date: Tue, 8 Aug 2023 23:13:25 +0000
Message-ID: <20230808231330.3855936-10-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 09/14] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a range
of stage-2 page-tables using IPA in one go. If the system supports
FEAT_TLBIRANGE, the following patches can conveniently replace global
TLBI operations such as vmalls12e1is in the map, unmap, and
dirty-logging paths with ripas2e1is.
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 28 ++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7d170aaa2db41..2c27cb8cf442d 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
@@ -229,6 +230,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 					 phys_addr_t ipa,
 					 int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				       phys_addr_t start, unsigned long pages);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index a169c619db60b..857d9bc04fd48 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -135,6 +135,16 @@ static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctx
 	__kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(unsigned long, pages, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -327,6 +337,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
 	HANDLE_FUNC(__vgic_v3_read_vmcr),

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index b9991bbd8e3fd..1b265713d6bed 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -182,6 +182,36 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt, false);
+
+	__flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index e69da550cdc5b..46bd43f61d76f 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -143,6 +143,34 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
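Both the VHE and nVHE variants issue the range-based ipas2e1is walk and then follow up with dsb(ish) plus vmalle1is: invalidating by IPA only drops stage-2 entries, so cached combined stage-1+stage-2 translations still need to be flushed, mirroring what __kvm_tlb_flush_vmid_ipa() already does. From the host side the new entry point is reached through the usual hypercall plumbing; patch 10/14 below invokes it as:

	/* One hypercall invalidates up to MAX_TLBI_RANGE_PAGES IPAs for the VMID. */
	kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);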
From patchwork Tue Aug 8 23:13:26 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347192
Date: Tue, 8 Aug 2023 23:13:26 +0000
Message-ID: <20230808231330.3855936-11-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 10/14] KVM: arm64: Define kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan, Shaoqin Huang

Implement the helper kvm_tlb_flush_vmid_range() that acts as a wrapper
for range-based TLB invalidations. For the given VMID, use the
range-based TLBI instructions to do the job or fall back to
invalidating all the TLB entries.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 8294a9a7e566d..5e8b1ff07854b 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -754,4 +754,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
  * kvm_pgtable_prot format.
  */
 enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
+
+/**
+ * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
+ *
+ * @mmu:	Stage-2 KVM MMU struct
+ * @addr:	The base Intermediate physical address from which to invalidate
+ * @size:	Size of the range from the base to invalidate
+ */
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t addr, size_t size);
 #endif	/* __ARM64_KVM_PGTABLE_H__ */

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index aa740a974e024..5d14d5d5819a1 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -670,6 +670,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
 	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }
 
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t addr, size_t size)
+{
+	unsigned long pages, inval_pages;
+
+	if (!system_supports_tlb_range()) {
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+		return;
+	}
+
+	pages = size >> PAGE_SHIFT;
+	while (pages > 0) {
+		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
+
+		addr += inval_pages << PAGE_SHIFT;
+		pages -= inval_pages;
+	}
+}
+
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
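A quick standalone check of how the chunking loop above splits a large range. The MAX_TLBI_RANGE_PAGES value is an assumption based on the arm64 range-TLBI encoding limit, __TLBI_RANGE_PAGES(31, 3) = 32 << 16 pages, i.e. 8 GiB with 4 KiB pages; the numbers are purely illustrative:

#include <stdio.h>

#define PAGE_SHIFT		12		/* assume 4 KiB pages */
/* Assumed arm64 limit: __TLBI_RANGE_PAGES(31, 3) = 32 << 16 pages (8 GiB). */
#define MAX_TLBI_RANGE_PAGES	(32ULL << 16)

int main(void)
{
	unsigned long long size = 33ULL << 30;	/* a 33 GiB range */
	unsigned long long pages = size >> PAGE_SHIFT;
	unsigned long long addr = 0, calls = 0;

	while (pages > 0) {
		unsigned long long inval = pages < MAX_TLBI_RANGE_PAGES ?
					   pages : MAX_TLBI_RANGE_PAGES;

		printf("hypercall %llu: addr=%#llx pages=%llu\n",
		       ++calls, addr, inval);
		addr += inval << PAGE_SHIFT;
		pages -= inval;
	}
	/* Prints 5 calls: 4 covering 8 GiB each, plus one 1 GiB remainder. */
	return 0;
}

So a 33 GiB invalidation costs five hypercalls rather than one full-VM flush, and each hypercall stays within the range encoding's reach.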
From patchwork Tue Aug 8 23:13:27 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347193
Date: Tue, 8 Aug 2023 23:13:27 +0000
Message-ID: <20230808231330.3855936-12-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 11/14] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan, Shaoqin Huang

Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate
the given range in the TLB.
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/mmu.c              | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 20f2ba149c70c..8f2d99eaab036 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1113,6 +1113,8 @@ struct kvm *kvm_arch_alloc_vm(void);
 
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0ac721fa27f18..294078ce16349 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -172,6 +172,14 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
+				      gfn_t gfn, u64 nr_pages)
+{
+	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
+				gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);
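For context, the common-code consumer of this hook (added earlier in the series) is expected to look roughly like the sketch below; this is an approximation for orientation, not the literal virt/kvm/kvm_main.c code. On arm64 the hook above always returns 0, so the full-flush fallback is never taken:

/* Approximate sketch: try the arch range flush, fall back to a full flush. */
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
	if (!kvm_arch_flush_remote_tlbs_range(kvm, gfn, nr_pages))
		return;

	/* The architecture could not (or chose not to) flush the range. */
	kvm_flush_remote_tlbs(kvm);
}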
From patchwork Tue Aug 8 23:13:28 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347194
Date: Tue, 8 Aug 2023 23:13:28 +0000
Message-ID: <20230808231330.3855936-13-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 12/14] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan, Shaoqin Huang

After write-protecting a region, KVM currently invalidates all TLB
entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation
to the targeted memslot. If supported, the architecture uses the
range-based TLBI instructions to flush the memslot; otherwise it falls
back to flushing all of the TLBs.
Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 294078ce16349..95ca2b86aa2cd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1083,7 +1083,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	write_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 	write_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
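kvm_flush_remote_tlbs_memslot() also comes from the common-code half of the series; conceptually it reduces the memslot to a gfn range and reuses the range flush, roughly as below. This rendering is an assumption about the generic helper, shown for orientation only:

	/* The slot's base gfn and page count bound the range invalidation. */
	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);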
From patchwork Tue Aug 8 23:13:29 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347195
Date: Tue, 8 Aug 2023 23:13:29 +0000
Message-ID: <20230808231330.3855936-14-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 13/14] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Gavin Shan, Shaoqin Huang

Currently, during operations such as a hugepage collapse, KVM flushes
the entire VM's context using the 'vmalls12e1is' TLBI operation. If the
VM is faulting on many hugepages (say, after dirty logging), this
penalizes vCPUs whose pages were already faulted in earlier, as they
have to refill their TLBs. Instead, leverage kvm_tlb_flush_vmid_range()
for table entries. If the system supports it, only the required range
is flushed; otherwise, fall back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/kvm/hyp/pgtable.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 5d14d5d5819a1..5ef098af17362 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -806,7 +806,8 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	 * evicted pte value (if any).
 	 */
 	if (kvm_pte_table(ctx->old, ctx->level))
-		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+		kvm_tlb_flush_vmid_range(mmu, ctx->addr,
+					kvm_granule_size(ctx->level));
 	else if (kvm_pte_valid(ctx->old))
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
 			     ctx->addr, ctx->level);
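The granule passed to kvm_tlb_flush_vmid_range() above is the region mapped by the table entry being broken. A standalone calculation, assuming the usual arm64 level shift (PAGE_SHIFT - 3) * (4 - level) + 3 that kvm_granule_size() is built on:

#include <stdio.h>

#define PAGE_SHIFT	12	/* assume a 4 KiB granule */

/* Assumed to mirror ARM64_HW_PGTABLE_LEVEL_SHIFT()/kvm_granule_size(). */
static unsigned long long granule_size(int level)
{
	return 1ULL << ((PAGE_SHIFT - 3) * (4 - level) + 3);
}

int main(void)
{
	/* level 1 -> 1 GiB, level 2 -> 2 MiB, level 3 -> 4 KiB */
	for (int level = 1; level <= 3; level++)
		printf("level %d: %llu bytes\n", level, granule_size(level));
	return 0;
}

So collapsing a level-2 table into a 2 MiB block flushes exactly 2 MiB of IPA space instead of the whole VMID.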
From patchwork Tue Aug 8 23:13:30 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347196
Date: Tue, 8 Aug 2023 23:13:30 +0000
Message-ID: <20230808231330.3855936-15-rananta@google.com>
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Subject: [PATCH v8 14/14] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis, Raghavendra Rao Anata, David Matlack, Fuad Tabba, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
given range and, as a part of break-before-make, performs TLB
invalidations with a DSB for every PTE. Issuing many such TLBI-plus-DSB
sequences can create a performance bottleneck on some systems. Hence,
if the system supports FEAT_TLBIRANGE, defer the TLB invalidations
until the entire walk is finished, and then use range-based
instructions to invalidate the TLBs in one go. Condition deferred TLB
invalidation on the system supporting FWB, as the optimization is
entirely pointless when the unmap walker needs to perform CMOs.

Rename stage2_put_pte() to stage2_unmap_put_pte() as the function now
serves the stage-2 unmap walker specifically, rather than being a
generic helper.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Shaoqin Huang
---
 arch/arm64/kvm/hyp/pgtable.c | 40 +++++++++++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 5ef098af17362..eaaae76481fa9 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -831,16 +831,36 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
 	smp_store_release(ctx->ptep, new);
 }
 
-static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-			   struct kvm_pgtable_mm_ops *mm_ops)
+static bool stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
 {
 	/*
-	 * Clear the existing PTE, and perform break-before-make with
-	 * TLB maintenance if it was valid.
+	 * If FEAT_TLBIRANGE is implemented, defer the individual
+	 * TLB invalidations until the entire walk is finished, and
+	 * then use the range-based TLBI instructions to do the
+	 * invalidations. Condition deferred TLB invalidation on the
+	 * system supporting FWB as the optimization is entirely
+	 * pointless when the unmap walker needs to perform CMOs.
+	 */
+	return system_supports_tlb_range() && stage2_has_fwb(pgt);
+}
+
+static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
+				 struct kvm_s2_mmu *mmu,
+				 struct kvm_pgtable_mm_ops *mm_ops)
+{
+	struct kvm_pgtable *pgt = ctx->arg;
+
+	/*
+	 * Clear the existing PTE, and perform break-before-make if it was
+	 * valid. Depending on the system support, defer the TLB maintenance
+	 * for the same until the entire unmap walk is completed.
 	 */
 	if (kvm_pte_valid(ctx->old)) {
 		kvm_clear_pte(ctx->ptep);
-		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+		if (!stage2_unmap_defer_tlb_flush(pgt))
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+				     ctx->addr, ctx->level);
 	}
 
 	mm_ops->put_page(ctx->ptep);
@@ -1098,7 +1118,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops);
+	stage2_unmap_put_pte(ctx, mmu, mm_ops);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1112,13 +1132,19 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	int ret;
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
 		.arg	= pgt,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	return kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	if (stage2_unmap_defer_tlb_flush(pgt))
+		/* Perform the deferred TLB invalidations */
+		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
+
+	return ret;
 }
 
 struct stage2_attr_data {
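To summarize the effect of this last patch, a toy cost model contrasting the eager and deferred unmap strategies. The counts are illustrative only: a deferred flush may still be split into several range operations, per the chunking shown in patch 10/14, and the eager path's cost depends on how many PTEs are actually valid:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of the two unmap strategies from this patch. */
struct flush_stats {
	unsigned long tlbi;	/* TLBI operations issued */
	unsigned long dsb;	/* barriers issued */
};

static struct flush_stats unmap(unsigned long valid_ptes, bool defer)
{
	struct flush_stats s = {0, 0};

	if (!defer) {
		/* Eager: break-before-make flushes every PTE as it goes. */
		s.tlbi = valid_ptes;
		s.dsb = valid_ptes;
	} else {
		/* Deferred (FEAT_TLBIRANGE + FWB): one range flush at the end,
		 * counted as a single operation for simplicity. */
		s.tlbi = 1;
		s.dsb = 1;
	}
	return s;
}

int main(void)
{
	struct flush_stats eager = unmap(1UL << 18, false);	/* 1 GiB of 4 KiB pages */
	struct flush_stats lazy = unmap(1UL << 18, true);

	printf("eager:    %lu TLBIs, %lu DSBs\n", eager.tlbi, eager.dsb);
	printf("deferred: %lu TLBIs, %lu DSBs\n", lazy.tlbi, lazy.dsb);
	return 0;
}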