From patchwork Mon Feb 22 15:53:34 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098969
From: Shameer Kolothum
Subject: [RFC PATCH 1/5] vfio: Add a helper to retrieve kvm instance from a dev
Date: Mon, 22 Feb 2021 15:53:34 +0000
Message-ID: <20210222155338.26132-2-shameerali.kolothum.thodi@huawei.com>

A device that belongs to a vfio_group may have a KVM instance associated
with it. Add a helper to retrieve it.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio.c  | 12 ++++++++++++
 include/linux/vfio.h |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 2151bc7f87ab..d1968e4bf9f4 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -876,6 +876,18 @@ struct vfio_device *vfio_device_get_from_dev(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(vfio_device_get_from_dev);
 
+struct kvm *vfio_kvm_get_from_dev(struct device *dev)
+{
+	struct vfio_group *group;
+
+	group = vfio_group_get_from_dev(dev);
+	if (!group)
+		return NULL;
+
+	return group->kvm;
+}
+EXPORT_SYMBOL_GPL(vfio_kvm_get_from_dev);
+
 static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
						     char *buf)
 {
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 38d3c6a8dc7e..8716298fee27 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -56,6 +56,7 @@ extern void *vfio_del_group_dev(struct device *dev);
 extern struct vfio_device *vfio_device_get_from_dev(struct device *dev);
 extern void vfio_device_put(struct vfio_device *device);
 extern void *vfio_device_data(struct vfio_device *device);
+extern struct kvm *vfio_kvm_get_from_dev(struct device *dev);
 
 /**
  * struct vfio_iommu_driver_ops - VFIO IOMMU driver callbacks
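
For illustration, a minimal caller of the new helper might look like the
sketch below. example_bind_device_to_vm() and its error policy are
hypothetical and not part of this series. Note the helper returns NULL both
when the device has no vfio_group and when userspace has not yet associated
a KVM instance with that group:

static int example_bind_device_to_vm(struct device *dev)
{
	struct kvm *kvm;

	/* group->kvm is set once userspace registers the VM with VFIO */
	kvm = vfio_kvm_get_from_dev(dev);
	if (!kvm)
		return -ENODEV;

	/* ... use the kvm instance for VM-scoped setup ... */
	return 0;
}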
From patchwork Mon Feb 22 15:53:35 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098971
From: Shameer Kolothum
Subject: [RFC PATCH 2/5] KVM: Add generic infrastructure to support pinned VMIDs
Date: Mon, 22 Feb 2021 15:53:35 +0000
Message-ID: <20210222155338.26132-3-shameerali.kolothum.thodi@huawei.com>

Provide generic helper functions to get/put pinned VMIDs when the
architecture supports them.
Signed-off-by: Shameer Kolothum
---
 include/linux/kvm_host.h | 17 +++++++++++++++++
 virt/kvm/Kconfig         |  2 ++
 virt/kvm/kvm_main.c      | 25 +++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7f2e2a09ebbd..25856db74a08 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -836,6 +836,8 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
+int kvm_pinned_vmid_get(struct device *dev);
+int kvm_pinned_vmid_put(struct device *dev);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);
@@ -1478,4 +1480,19 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 }
 #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
 
+#ifdef CONFIG_HAVE_KVM_PINNED_VMID
+int kvm_arch_pinned_vmid_get(struct kvm *kvm);
+int kvm_arch_pinned_vmid_put(struct kvm *kvm);
+#else
+static inline int kvm_arch_pinned_vmid_get(struct kvm *kvm)
+{
+	return -EINVAL;
+}
+
+static inline int kvm_arch_pinned_vmid_put(struct kvm *kvm)
+{
+	return -EINVAL;
+}
+#endif
+
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 1c37ccd5d402..bb55616c5616 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -63,3 +63,5 @@ config HAVE_KVM_NO_POLL
 
 config KVM_XFER_TO_GUEST_WORK
 	bool
+config HAVE_KVM_PINNED_VMID
+	bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2541a17ff1c4..632d391f0e34 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include <linux/vfio.h>
 #include
 #include
@@ -2849,6 +2850,30 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
 
+int kvm_pinned_vmid_get(struct device *dev)
+{
+	struct kvm *kvm;
+
+	kvm = vfio_kvm_get_from_dev(dev);
+	if (!kvm)
+		return -EINVAL;
+
+	return kvm_arch_pinned_vmid_get(kvm);
+}
+EXPORT_SYMBOL_GPL(kvm_pinned_vmid_get);
+
+int kvm_pinned_vmid_put(struct device *dev)
+{
+	struct kvm *kvm;
+
+	kvm = vfio_kvm_get_from_dev(dev);
+	if (!kvm)
+		return -EINVAL;
+
+	return kvm_arch_pinned_vmid_put(kvm);
+}
+EXPORT_SYMBOL_GPL(kvm_pinned_vmid_put);
+
 #ifndef CONFIG_S390
 /*
  * Kick a sleeping VCPU, or a guest VCPU in guest mode, into host kernel mode.
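
For an architecture that selects HAVE_KVM_PINNED_VMID, the contract behind
the two hooks is roughly the sketch below. Only the kvm_arch_* signatures
come from the patch; the example_* callees are hypothetical placeholders:

/* Return the pinned VMID (>= 0) on success, or a negative errno. */
int kvm_arch_pinned_vmid_get(struct kvm *kvm)
{
	/* Pin (or re-reference) a VMID that survives generation rollover. */
	return example_alloc_or_ref_pinned_vmid(kvm);
}

int kvm_arch_pinned_vmid_put(struct kvm *kvm)
{
	/* Drop one pin; free the VMID once no user is left. */
	return example_unref_pinned_vmid(kvm);
}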
From patchwork Mon Feb 22 15:53:36 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098977
From: Shameer Kolothum
Subject: [RFC PATCH 3/5] KVM: ARM64: Add support for pinned VMIDs
Date: Mon, 22 Feb 2021 15:53:36 +0000
Message-ID: <20210222155338.26132-4-shameerali.kolothum.thodi@huawei.com>

On an ARM64 system with an SMMUv3 implementation that fully supports the
Broadcast TLB Maintenance (BTM) feature, the CPU TLB invalidate instructions
are also received by the SMMU. This is very useful when the SMMU shares the
page tables with the CPU (e.g. the guest SVA use case).

For this to work, the SMMU must use the same VMID that is allocated by KVM
to configure the stage 2 translations. At present, KVM VMID allocations are
recycled on rollover and may change as a result, which makes sharing the KVM
VMID with the SMMU unsafe. Hence, split the KVM VMID space into two halves:
the first half follows the normal recycle-on-rollover policy, while the
second half of the VMID space is used to allocate pinned VMIDs. This feature
is enabled with the command line option "kvm-arm.pinned_vmid_enable".

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/Kconfig            |   1 +
 arch/arm64/kvm/arm.c              | 104 +++++++++++++++++++++++++++++-
 3 files changed, 106 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0cd9f0f75c13..db6441c6a580 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/refcount.h>
 #include
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -65,6 +66,7 @@ struct kvm_vmid {
 	/* The VMID generation used for the virt. memory system */
 	u64 vmid_gen;
 	u32 vmid;
+	refcount_t pinned;
 };
 
 struct kvm_s2_mmu {
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 043756db8f6e..c5c52953e842 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -40,6 +40,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select TASKSTATS
 	select TASK_DELAY_ACCT
+	select HAVE_KVM_PINNED_VMID
 	help
 	  Support hosting virtualized guest machines.
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c0ffb019ca8b..8955968be49f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -56,6 +56,19 @@ static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u32 kvm_next_vmid;
 static DEFINE_SPINLOCK(kvm_vmid_lock);
 
+static bool kvm_pinned_vmid_enable;
+
+static int __init early_pinned_vmid_enable(char *buf)
+{
+	return strtobool(buf, &kvm_pinned_vmid_enable);
+}
+
+early_param("kvm-arm.pinned_vmid_enable", early_pinned_vmid_enable);
+
+static DEFINE_IDA(kvm_pinned_vmids);
+static u32 kvm_pinned_vmid_start;
+static u32 kvm_pinned_vmid_end;
+
 static bool vgic_present;
 
 static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
@@ -475,6 +488,10 @@ void force_vm_exit(const cpumask_t *mask)
 static bool need_new_vmid_gen(struct kvm_vmid *vmid)
 {
 	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
+
+	if (refcount_read(&vmid->pinned))
+		return false;
+
 	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
 	return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen);
 }
@@ -485,6 +502,8 @@ static bool need_new_vmid_gen(struct kvm_vmid *vmid)
  */
 static void update_vmid(struct kvm_vmid *vmid)
 {
+	unsigned int vmid_bits;
+
 	if (!need_new_vmid_gen(vmid))
 		return;
 
@@ -521,7 +540,12 @@ static void update_vmid(struct kvm_vmid *vmid)
 
 	vmid->vmid = kvm_next_vmid;
 	kvm_next_vmid++;
-	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
+	if (kvm_pinned_vmid_enable)
+		vmid_bits = kvm_get_vmid_bits() - 1;
+	else
+		vmid_bits = kvm_get_vmid_bits();
+
+	kvm_next_vmid &= (1 << vmid_bits) - 1;
 
 	smp_wmb();
 	WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
@@ -569,6 +593,71 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+int kvm_arch_pinned_vmid_get(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vmid *kvm_vmid;
+	int ret;
+
+	if (!kvm_pinned_vmid_enable || !atomic_read(&kvm->online_vcpus))
+		return -EINVAL;
+
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (!vcpu)
+		return -EINVAL;
+
+	kvm_vmid = &vcpu->arch.hw_mmu->vmid;
+
+	spin_lock(&kvm_vmid_lock);
+
+	if (refcount_inc_not_zero(&kvm_vmid->pinned)) {
+		spin_unlock(&kvm_vmid_lock);
+		return kvm_vmid->vmid;
+	}
+
+	ret = ida_alloc_range(&kvm_pinned_vmids, kvm_pinned_vmid_start,
+			      kvm_pinned_vmid_end, GFP_KERNEL);
+	if (ret < 0) {
+		spin_unlock(&kvm_vmid_lock);
+		return ret;
+	}
+
+	force_vm_exit(cpu_all_mask);
+	kvm_call_hyp(__kvm_flush_vm_context);
+
+	kvm_vmid->vmid = (u32)ret;
+	refcount_set(&kvm_vmid->pinned, 1);
+	spin_unlock(&kvm_vmid_lock);
+
+	return ret;
+}
+
+int kvm_arch_pinned_vmid_put(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vmid *kvm_vmid;
+
+	if (!kvm_pinned_vmid_enable)
+		return -EINVAL;
+
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (!vcpu)
+		return -EINVAL;
+
+	kvm_vmid = &vcpu->arch.hw_mmu->vmid;
+
+	spin_lock(&kvm_vmid_lock);
+
+	if (!refcount_read(&kvm_vmid->pinned))
+		goto out;
+
+	if (refcount_dec_and_test(&kvm_vmid->pinned))
+		ida_free(&kvm_pinned_vmids, kvm_vmid->vmid);
+out:
+	spin_unlock(&kvm_vmid_lock);
+	return 0;
+}
+
 bool kvm_arch_intc_initialized(struct kvm *kvm)
 {
 	return vgic_initialized(kvm);
@@ -1680,6 +1769,16 @@ static void check_kvm_target_cpu(void *ret)
 	*(int *)ret = kvm_target_cpu();
 }
 
+static void kvm_arm_pinned_vmid_init(void)
+{
+	unsigned int vmid_bits = kvm_get_vmid_bits();
+
+	kvm_pinned_vmid_start = (1 << (vmid_bits - 1));
+	kvm_pinned_vmid_end = (1 << vmid_bits) - 1;
+
+	kvm_info("Pinned VMID[0x%x - 0x%x] enabled\n", kvm_pinned_vmid_start, kvm_pinned_vmid_end);
+}
+
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
 {
 	struct kvm_vcpu *vcpu;
@@ -1790,6 +1889,9 @@ int kvm_arch_init(void *opaque)
 	else
 		kvm_info("Hyp mode initialized successfully\n");
 
+	if (kvm_pinned_vmid_enable)
+		kvm_arm_pinned_vmid_init();
+
 	return 0;
 
 out_hyp:
From patchwork Mon Feb 22 15:53:37 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098975
From: Shameer Kolothum
Subject: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM
Date: Mon, 22 Feb 2021 15:53:37 +0000
Message-ID: <20210222155338.26132-5-shameerali.kolothum.thodi@huawei.com>
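
To make the carve-up concrete, here is a worked example of the resulting
VMID layout, derived from kvm_arm_pinned_vmid_init() and update_vmid()
above (the 8-bit case assumes hardware without 16-bit VMID support):

    Boot with: kvm-arm.pinned_vmid_enable=1

    vmid_bits = 16:
        rolling VMIDs: kvm_next_vmid &= (1 << 15) - 1   ->  0x0000..0x7fff
        pinned VMIDs:  ida_alloc_range(0x8000, 0xffff)  ->  0x8000..0xffff

    vmid_bits = 8:
        rolling VMIDs: 0x00..0x7f
        pinned VMIDs:  0x80..0xff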

If the SMMU supports BTM and the device belongs to a NESTED domain with a
shared PASID table, the SMMU must use the VMID allocated by KVM for its
stage 2 (s2) configuration. Hence, request a pinned VMID from KVM.

Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 26bf7da1bcd0..04f83f7c8319 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/kvm_host.h>
 #include
 
@@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
 	clear_bit(idx, map);
 }
 
+static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	return kvm_pinned_vmid_get(master->dev);
+}
+
+static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	if (smmu_domain->s2_cfg.vmid)
+		return kvm_pinned_vmid_put(master->dev);
+
+	return 0;
+}
+
 static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 		mutex_unlock(&arm_smmu_asid_lock);
 	}
 	if (s2_cfg->set) {
-		if (s2_cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		if (s2_cfg->vmid) {
+			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
+			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		}
 	}
 
 	kfree(smmu_domain);
@@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
 		goto out;
 
+	if (smmu->features & ARM_SMMU_FEAT_BTM) {
+		ret = arm_smmu_pinned_vmid_get(smmu_domain);
+		if (ret < 0)
+			goto out;
+
+		if (smmu_domain->s2_cfg.vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
+
+		smmu_domain->s2_cfg.vmid = (u16)ret;
+	}
+
 	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
 	smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
 	smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
@@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_master *master;
 	unsigned long flags;
@@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 	arm_smmu_install_ste_for_dev(master);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
+	if (smmu->features & ARM_SMMU_FEAT_BTM)
+		arm_smmu_pinned_vmid_put(smmu_domain);
 unlock:
 	mutex_unlock(&smmu_domain->init_mutex);
 }
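
The get/put pairing the driver has to maintain can be sketched as below.
example_nested_sva_enable()/_disable() are illustrative wrappers, not
functions from this series; the point is that every successful
arm_smmu_pinned_vmid_get() must eventually be balanced by
arm_smmu_pinned_vmid_put(), since the pin is refcounted inside KVM:

static int example_nested_sva_enable(struct arm_smmu_domain *smmu_domain)
{
	int vmid = arm_smmu_pinned_vmid_get(smmu_domain);

	if (vmid < 0)
		return vmid;	/* no master, no KVM instance, or no free VMID */

	/* KVM's stage 2 and the SMMU's stage 2 now share this VMID */
	smmu_domain->s2_cfg.vmid = (u16)vmid;
	return 0;
}

static void example_nested_sva_disable(struct arm_smmu_domain *smmu_domain)
{
	arm_smmu_pinned_vmid_put(smmu_domain);	/* drops one KVM pin reference */
}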
From patchwork Mon Feb 22 15:53:38 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098979
From: Shameer Kolothum
Subject: [RFC PATCH 5/5] KVM: arm64: Make sure pinned vmid is released on VM exit
Date: Mon, 22 Feb 2021 15:53:38 +0000
Message-ID: <20210222155338.26132-6-shameerali.kolothum.thodi@huawei.com>

Since the pinned VMID space is not recycled, we need to make sure that we
release the VMID back into the pool when we are done with it.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/kvm/arm.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8955968be49f..d9900ffb88f4 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -181,8 +181,16 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_vgic_destroy(kvm);
 
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
-		if (kvm->vcpus[i]) {
-			kvm_vcpu_destroy(kvm->vcpus[i]);
+		struct kvm_vcpu *vcpu = kvm->vcpus[i];
+
+		if (vcpu) {
+			struct kvm_vmid *vmid = &vcpu->arch.hw_mmu->vmid;
+
+			if (refcount_read(&vmid->pinned)) {
+				ida_free(&kvm_pinned_vmids, vmid->vmid);
+				refcount_set(&vmid->pinned, 0);
+			}
+			kvm_vcpu_destroy(vcpu);
 			kvm->vcpus[i] = NULL;
 		}
 	}
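
Putting the series together, the intended end-to-end flow reads roughly as
follows (an illustrative summary, not text from the patches):

 1. Userspace (e.g. QEMU) creates the VM and registers it with the VFIO
    group via the KVM-VFIO device (KVM_DEV_VFIO_GROUP_ADD), populating
    group->kvm.
 2. Nested-SVA setup attaches the guest's PASID table; on a BTM-capable
    SMMU the driver calls kvm_pinned_vmid_get(dev).
 3. KVM pins a VMID from the upper half of the VMID space and uses it for
    the guest's stage 2; the SMMU programs the same VMID into its stage 2
    configuration, so broadcast CPU TLB invalidations also reach the SMMU.
 4. On PASID table detach, the SMMU driver calls kvm_pinned_vmid_put(dev),
    dropping one pin reference.
 5. On VM destruction, kvm_arch_destroy_vm() (this patch) releases any
    still-pinned VMID back to the IDA.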