From patchwork Wed Sep 27 09:09:01 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 01/11] KVM: arm64: vgic: Make kvm_vgic_inject_irq() take a vcpu pointer
Date: Wed, 27 Sep 2023 10:09:01 +0100
Message-Id: <20230927090911.3355209-2-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

Passing a vcpu_id to kvm_vgic_inject_irq() is silly for two reasons:

- we often confuse vcpu_id and vcpu_idx
- we eventually have to convert it back to a vcpu
- we can't count

Instead, pass a vcpu pointer, which is unambiguous. A NULL vcpu is also
allowed for interrupts that are not private to a vcpu (such as SPIs).

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arch_timer.c      |  2 +-
 arch/arm64/kvm/arm.c             | 23 ++++++++---------------
 arch/arm64/kvm/pmu-emul.c        |  2 +-
 arch/arm64/kvm/vgic/vgic-irqfd.c |  2 +-
 arch/arm64/kvm/vgic/vgic.c       | 12 +++++-------
 include/kvm/arm_vgic.h           |  4 ++--
 6 files changed, 18 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 6dcdae4d38cb..1f828f3b854c 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -458,7 +458,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
 				   timer_ctx->irq.level);
 
 	if (!userspace_irqchip(vcpu->kvm)) {
-		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
 					  timer_irq(timer_ctx),
 					  timer_ctx->irq.level,
 					  timer_ctx);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4866b3f7b4ea..fa21fb15e927 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1134,27 +1134,23 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 			  bool line_status)
 {
 	u32 irq = irq_level->irq;
-	unsigned int irq_type, vcpu_idx, irq_num;
-	int nrcpus = atomic_read(&kvm->online_vcpus);
+	unsigned int irq_type, vcpu_id, irq_num;
 	struct kvm_vcpu *vcpu = NULL;
 	bool level = irq_level->level;
 
 	irq_type = (irq >> KVM_ARM_IRQ_TYPE_SHIFT) & KVM_ARM_IRQ_TYPE_MASK;
-	vcpu_idx = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK;
-	vcpu_idx += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1);
+	vcpu_id = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK;
+	vcpu_id += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1);
 	irq_num = (irq >> KVM_ARM_IRQ_NUM_SHIFT) & KVM_ARM_IRQ_NUM_MASK;
 
-	trace_kvm_irq_line(irq_type, vcpu_idx, irq_num, irq_level->level);
+	trace_kvm_irq_line(irq_type, vcpu_id, irq_num, irq_level->level);
 
 	switch (irq_type) {
 	case KVM_ARM_IRQ_TYPE_CPU:
 		if (irqchip_in_kernel(kvm))
 			return -ENXIO;
 
-		if (vcpu_idx >= nrcpus)
-			return -EINVAL;
-
-		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+		vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
 		if (!vcpu)
 			return -EINVAL;
 
@@ -1166,17 +1162,14 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 		if (!irqchip_in_kernel(kvm))
 			return -ENXIO;
 
-		if (vcpu_idx >= nrcpus)
-			return -EINVAL;
-
-		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+		vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
 		if (!vcpu)
 			return -EINVAL;
 
 		if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS)
 			return -EINVAL;
 
-		return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL);
+		return kvm_vgic_inject_irq(kvm, vcpu, irq_num, level, NULL);
 	case KVM_ARM_IRQ_TYPE_SPI:
 		if (!irqchip_in_kernel(kvm))
 			return -ENXIO;
@@ -1184,7 +1177,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 		if (irq_num < VGIC_NR_PRIVATE_IRQS)
 			return -EINVAL;
 
-		return kvm_vgic_inject_irq(kvm, 0, irq_num, level, NULL);
+		return kvm_vgic_inject_irq(kvm, NULL, irq_num, level, NULL);
 	}
 
 	return -EINVAL;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6b066e04dc5d..3afb281ed8d2 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -348,7 +348,7 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	pmu->irq_level = overflow;
 
 	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
 					      pmu->irq_num, overflow, pmu);
 		WARN_ON(ret);
 	}
diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index 475059bacedf..8c711deb25aa 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
@@ -23,7 +23,7 @@ static int vgic_irqfd_set_irq(struct kvm_kernel_irq_routing_entry *e,
 	if (!vgic_valid_spi(kvm, spi_id))
 		return -EINVAL;
 
-	return kvm_vgic_inject_irq(kvm, 0, spi_id, level, NULL);
+	return kvm_vgic_inject_irq(kvm, NULL, spi_id, level, NULL);
 }
 
 /**
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index 8be4c1ebdec2..db2a95762b1b 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -422,7 +422,7 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
 /**
  * kvm_vgic_inject_irq - Inject an IRQ from a device to the vgic
  * @kvm:	The VM structure pointer
- * @cpuid:	The CPU for PPIs
+ * @vcpu:	The CPU for PPIs or NULL for global interrupts
  * @intid:	The INTID to inject a new state to.
  * @level:	Edge-triggered:  true:  to trigger the interrupt
  *			      false: to ignore the call
@@ -436,24 +436,22 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
  * level-sensitive interrupts. You can think of the level parameter as 1
  * being HIGH and 0 being LOW and all devices being active-HIGH.
  */
-int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
-			bool level, void *owner)
+int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			unsigned int intid, bool level, void *owner)
 {
-	struct kvm_vcpu *vcpu;
 	struct vgic_irq *irq;
 	unsigned long flags;
 	int ret;
 
-	trace_vgic_update_irq_pending(cpuid, intid, level);
-
 	ret = vgic_lazy_init(kvm);
 	if (ret)
 		return ret;
 
-	vcpu = kvm_get_vcpu(kvm, cpuid);
 	if (!vcpu && intid < VGIC_NR_PRIVATE_IRQS)
 		return -EINVAL;
 
+	trace_vgic_update_irq_pending(vcpu ? vcpu->vcpu_idx : 0, intid, level);
+
 	irq = vgic_get_irq(kvm, vcpu, intid);
 	if (!irq)
 		return -EINVAL;
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 5b27f94d4fad..8cc38e836f54 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -375,8 +375,8 @@ int kvm_vgic_map_resources(struct kvm *kvm);
 int kvm_vgic_hyp_init(void);
 void kvm_vgic_init_cpu_hardware(void);
 
-int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
-			bool level, void *owner);
+int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			unsigned int intid, bool level, void *owner);
 int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq,
 			  u32 vintid, struct irq_ops *ops);
 int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid);
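[Editorial sketch] A minimal illustration of the new calling convention, not part of the patch; the helper name and INTID values below are invented for the example, only the kvm_vgic_inject_irq() prototype is the one introduced above:

static int example_injections(struct kvm *kvm, struct kvm_vcpu *vcpu)
{
	int ret;

	/* Private interrupt (PPI 27): the target vcpu must be supplied */
	ret = kvm_vgic_inject_irq(kvm, vcpu, 27, true, NULL);
	if (ret)
		return ret;

	/* Shared interrupt (SPI 40): no per-vcpu target, pass a NULL vcpu */
	return kvm_vgic_inject_irq(kvm, NULL, 40, true, NULL);
}

Passing a NULL vcpu with a private INTID is rejected with -EINVAL, as the vgic.c hunk above shows.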
From patchwork Wed Sep 27 09:09:02 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 02/11] KVM: arm64: vgic-its: Treat the collection target address as a vcpu_id
Date: Wed, 27 Sep 2023 10:09:02 +0100
Message-Id: <20230927090911.3355209-3-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

Since our emulated ITS advertises GITS_TYPER.PTA=0, the target address
associated with a collection is a PE number and not an address. So far,
so good. However, the PE number is what userspace has given us (aka the
vcpu_id), and not the internal vcpu index.

Make sure we consistently retrieve the vcpu by ID rather than by index,
adding a helper that deals with most of the cases.

We also get rid of the pointless (and bogus) comparisons to
online_vcpus, which don't really make sense.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-its.c | 49 +++++++++++++++++++---------------
 1 file changed, 27 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 5fe2365a629f..2dad2d095160 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -378,6 +378,12 @@ static int update_affinity(struct vgic_irq *irq, struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static struct kvm_vcpu *collection_to_vcpu(struct kvm *kvm,
+					   struct its_collection *col)
+{
+	return kvm_get_vcpu_by_id(kvm, col->target_addr);
+}
+
 /*
  * Promotes the ITS view of affinity of an ITTE (which redistributor this LPI
  * is targeting) to the VGIC's view, which deals with target VCPUs.
@@ -391,7 +397,7 @@ static void update_affinity_ite(struct kvm *kvm, struct its_ite *ite)
 	if (!its_is_collection_mapped(ite->collection))
 		return;
 
-	vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr);
+	vcpu = collection_to_vcpu(kvm, ite->collection);
 	update_affinity(ite->irq, vcpu);
 }
 
@@ -679,7 +685,7 @@ int vgic_its_resolve_lpi(struct kvm *kvm, struct vgic_its *its,
 	if (!ite || !its_is_collection_mapped(ite->collection))
 		return E_ITS_INT_UNMAPPED_INTERRUPT;
 
-	vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr);
+	vcpu = collection_to_vcpu(kvm, ite->collection);
 	if (!vcpu)
 		return E_ITS_INT_UNMAPPED_INTERRUPT;
 
@@ -887,7 +893,7 @@ static int vgic_its_cmd_handle_movi(struct kvm *kvm, struct vgic_its *its,
 		return E_ITS_MOVI_UNMAPPED_COLLECTION;
 
 	ite->collection = collection;
-	vcpu = kvm_get_vcpu(kvm, collection->target_addr);
+	vcpu = collection_to_vcpu(kvm, collection);
 
 	vgic_its_invalidate_cache(kvm);
 
@@ -1121,7 +1127,7 @@ static int vgic_its_cmd_handle_mapi(struct kvm *kvm, struct vgic_its *its,
 	}
 
 	if (its_is_collection_mapped(collection))
-		vcpu = kvm_get_vcpu(kvm, collection->target_addr);
+		vcpu = collection_to_vcpu(kvm, collection);
 
 	irq = vgic_add_lpi(kvm, lpi_nr, vcpu);
 	if (IS_ERR(irq)) {
@@ -1242,21 +1248,22 @@ static int vgic_its_cmd_handle_mapc(struct kvm *kvm, struct vgic_its *its,
 				    u64 *its_cmd)
 {
 	u16 coll_id;
-	u32 target_addr;
 	struct its_collection *collection;
 	bool valid;
 
 	valid = its_cmd_get_validbit(its_cmd);
 	coll_id = its_cmd_get_collection(its_cmd);
-	target_addr = its_cmd_get_target_addr(its_cmd);
-
-	if (target_addr >= atomic_read(&kvm->online_vcpus))
-		return E_ITS_MAPC_PROCNUM_OOR;
 
 	if (!valid) {
 		vgic_its_free_collection(its, coll_id);
 		vgic_its_invalidate_cache(kvm);
 	} else {
+		struct kvm_vcpu *vcpu;
+
+		vcpu = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd));
+		if (!vcpu)
+			return E_ITS_MAPC_PROCNUM_OOR;
+
 		collection = find_collection(its, coll_id);
 
 		if (!collection) {
@@ -1270,9 +1277,9 @@ static int vgic_its_cmd_handle_mapc(struct kvm *kvm, struct vgic_its *its,
 						coll_id);
 			if (ret)
 				return ret;
-			collection->target_addr = target_addr;
+			collection->target_addr = vcpu->vcpu_id;
 		} else {
-			collection->target_addr = target_addr;
+			collection->target_addr = vcpu->vcpu_id;
 			update_affinity_collection(kvm, its, collection);
 		}
 	}
@@ -1382,7 +1389,7 @@ static int vgic_its_cmd_handle_invall(struct kvm *kvm, struct vgic_its *its,
 	if (!its_is_collection_mapped(collection))
 		return E_ITS_INVALL_UNMAPPED_COLLECTION;
 
-	vcpu = kvm_get_vcpu(kvm, collection->target_addr);
+	vcpu = collection_to_vcpu(kvm, collection);
 	vgic_its_invall(vcpu);
 
 	return 0;
@@ -1399,23 +1406,21 @@ static int vgic_its_cmd_handle_invall(struct kvm *kvm, struct vgic_its *its,
 static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,
 				      u64 *its_cmd)
 {
-	u32 target1_addr = its_cmd_get_target_addr(its_cmd);
-	u32 target2_addr = its_cmd_mask_field(its_cmd, 3, 16, 32);
 	struct kvm_vcpu *vcpu1, *vcpu2;
 	struct vgic_irq *irq;
 	u32 *intids;
 	int irq_count, i;
 
-	if (target1_addr >= atomic_read(&kvm->online_vcpus) ||
-	    target2_addr >= atomic_read(&kvm->online_vcpus))
+	/* We advertise GITS_TYPER.PTA==0, making the address the vcpu ID */
+	vcpu1 = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd));
+	vcpu2 = kvm_get_vcpu_by_id(kvm, its_cmd_mask_field(its_cmd, 3, 16, 32));
+
+	if (!vcpu1 || !vcpu2)
 		return E_ITS_MOVALL_PROCNUM_OOR;
 
-	if (target1_addr == target2_addr)
+	if (vcpu1 == vcpu2)
 		return 0;
 
-	vcpu1 = kvm_get_vcpu(kvm, target1_addr);
-	vcpu2 = kvm_get_vcpu(kvm, target2_addr);
-
 	irq_count = vgic_copy_lpi_list(kvm, vcpu1, &intids);
 	if (irq_count < 0)
 		return irq_count;
@@ -2258,7 +2263,7 @@ static int vgic_its_restore_ite(struct vgic_its *its, u32 event_id,
 		return PTR_ERR(ite);
 
 	if (its_is_collection_mapped(collection))
-		vcpu = kvm_get_vcpu(kvm, collection->target_addr);
+		vcpu = kvm_get_vcpu_by_id(kvm, collection->target_addr);
 
 	irq = vgic_add_lpi(kvm, lpi_id, vcpu);
 	if (IS_ERR(irq)) {
@@ -2573,7 +2578,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
 	coll_id = val & KVM_ITS_CTE_ICID_MASK;
 
 	if (target_addr != COLLECTION_NOT_MAPPED &&
-	    target_addr >= atomic_read(&kvm->online_vcpus))
+	    !kvm_get_vcpu_by_id(kvm, target_addr))
 		return -EINVAL;
 
 	collection = find_collection(its, coll_id);
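[Editorial sketch] Since the whole series hinges on the difference between the two identifiers, here is a small hedged illustration; the wrapper names are invented, only kvm_get_vcpu() and kvm_get_vcpu_by_id() are real kernel helpers. The ITS collection target is a PE number chosen by userspace, i.e. a vcpu_id, so it has to go through the by-ID lookup:

/*
 * vcpu_idx is KVM's internal, dense index into the vcpu array; vcpu_id is
 * whatever value userspace passed to KVM_CREATE_VCPU. The two only
 * coincide when vcpus are created with consecutive IDs starting at 0.
 */
static struct kvm_vcpu *by_index(struct kvm *kvm, int idx)
{
	return kvm_get_vcpu(kvm, idx);		/* position in the array */
}

static struct kvm_vcpu *by_id(struct kvm *kvm, int id)
{
	return kvm_get_vcpu_by_id(kvm, id);	/* userspace-visible ID */
}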
From patchwork Wed Sep 27 09:09:03 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 03/11] KVM: arm64: vgic-v3: Refactor GICv3 SGI generation
Date: Wed, 27 Sep 2023 10:09:03 +0100
Message-Id: <20230927090911.3355209-4-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

As we're about to change the way SGIs are sent, start by splitting out
some of the basic functionality: instead of intermingling the broadcast
and non-broadcast cases with the actual SGI generation, perform the
following cleanups:

- move the SGI queuing into its own helper
- split the broadcast code from the affinity-driven code
- replace the mask/shift combinations with FIELD_GET()
- fix the confusion between vcpu_id and vcpu when handling the broadcast
  case

The result is much more readable, and paves the way for further
optimisations.

Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-mmio-v3.c | 110 ++++++++++++++++-------------
 1 file changed, 59 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index 188d2187eede..88b8d4524854 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -1052,6 +1052,38 @@ static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
 	((((reg) & ICC_SGI1R_AFFINITY_## level ##_MASK) \
 	>> ICC_SGI1R_AFFINITY_## level ##_SHIFT) << MPIDR_LEVEL_SHIFT(level))
 
+static void vgic_v3_queue_sgi(struct kvm_vcpu *vcpu, u32 sgi, bool allow_group1)
+{
+	struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, sgi);
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&irq->irq_lock, flags);
+
+	/*
+	 * An access targeting Group0 SGIs can only generate
+	 * those, while an access targeting Group1 SGIs can
+	 * generate interrupts of either group.
+	 */
+	if (!irq->group || allow_group1) {
+		if (!irq->hw) {
+			irq->pending_latch = true;
+			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		} else {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    true);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+		}
+	} else {
+		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+	}
+
+	vgic_put_irq(vcpu->kvm, irq);
+}
+
 /**
  * vgic_v3_dispatch_sgi - handle SGI requests from VCPUs
  * @vcpu: The VCPU requesting a SGI
@@ -1070,19 +1102,30 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 {
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_vcpu *c_vcpu;
-	u16 target_cpus;
+	unsigned long target_cpus;
 	u64 mpidr;
-	int sgi;
-	int vcpu_id = vcpu->vcpu_id;
-	bool broadcast;
-	unsigned long c, flags;
-
-	sgi = (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT;
-	broadcast = reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT);
-	target_cpus = (reg & ICC_SGI1R_TARGET_LIST_MASK) >> ICC_SGI1R_TARGET_LIST_SHIFT;
+	u32 sgi;
+	unsigned long c;
+
+	sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg);
+
+	/* Broadcast */
+	if (unlikely(reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT))) {
+		kvm_for_each_vcpu(c, c_vcpu, kvm) {
+			/* Don't signal the calling VCPU */
+			if (c_vcpu == vcpu)
+				continue;
+
+			vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
+		}
+
+		return;
+	}
+
 	mpidr = SGI_AFFINITY_LEVEL(reg, 3);
 	mpidr |= SGI_AFFINITY_LEVEL(reg, 2);
 	mpidr |= SGI_AFFINITY_LEVEL(reg, 1);
+	target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg);
 
 	/*
 	 * We iterate over all VCPUs to find the MPIDRs matching the request.
@@ -1091,54 +1134,19 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 	 * VCPUs when most of the times we just signal a single VCPU.
 	 */
 	kvm_for_each_vcpu(c, c_vcpu, kvm) {
-		struct vgic_irq *irq;
+		int level0;
 
 		/* Exit early if we have dealt with all requested CPUs */
-		if (!broadcast && target_cpus == 0)
+		if (target_cpus == 0)
 			break;
-
-		/* Don't signal the calling VCPU */
-		if (broadcast && c == vcpu_id)
+		level0 = match_mpidr(mpidr, target_cpus, c_vcpu);
+		if (level0 == -1)
 			continue;
 
-		if (!broadcast) {
-			int level0;
-
-			level0 = match_mpidr(mpidr, target_cpus, c_vcpu);
-			if (level0 == -1)
-				continue;
-
-			/* remove this matching VCPU from the mask */
-			target_cpus &= ~BIT(level0);
-		}
+		/* remove this matching VCPU from the mask */
+		target_cpus &= ~BIT(level0);
 
-		irq = vgic_get_irq(vcpu->kvm, c_vcpu, sgi);
-
-		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-
-		/*
-		 * An access targeting Group0 SGIs can only generate
-		 * those, while an access targeting Group1 SGIs can
-		 * generate interrupts of either group.
-		 */
-		if (!irq->group || allow_group1) {
-			if (!irq->hw) {
-				irq->pending_latch = true;
-				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
-			} else {
-				/* HW SGI? Ask the GIC to inject it */
-				int err;
-				err = irq_set_irqchip_state(irq->host_irq,
-							    IRQCHIP_STATE_PENDING,
-							    true);
-				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
-				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
-			}
-		} else {
-			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
-		}
-
-		vgic_put_irq(vcpu->kvm, irq);
+		vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
 	}
 }
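[Editorial sketch] As an aside on one of the cleanups listed above: FIELD_GET() (from <linux/bitfield.h>) derives the shift from the mask, so the open-coded mask/shift pairs collapse into a single expression. The two expressions below are taken from the diff; the wrapper functions are only there to make the comparison self-contained in a kernel context:

#include <linux/bitfield.h>

static u32 sgi_id_before(u64 reg)
{
	/* Open-coded: the mask and the shift must be kept in sync by hand */
	return (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT;
}

static u32 sgi_id_after(u64 reg)
{
	/* FIELD_GET() derives the shift from the (constant) mask */
	return FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg);
}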
From patchwork Wed Sep 27 09:09:04 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 04/11] KVM: arm64: vgic-v2: Use cpuid from userspace as vcpu_id
Date: Wed, 27 Sep 2023 10:09:04 +0100
Message-Id: <20230927090911.3355209-5-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

When parsing a GICv2 attribute that contains a cpuid, handle this as
the vcpu_id, not a vcpu_idx, as userspace cannot really know the
mapping between the two. For this, use kvm_get_vcpu_by_id() instead of
kvm_get_vcpu().

Take this opportunity to get rid of the pointless check against
online_vcpus, which doesn't make much sense either, and switch to
FIELD_GET as a way to extract the vcpu_id.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-kvm-device.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 212b73a715c1..c11962f901e0 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -339,13 +339,9 @@ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
 {
 	int cpuid;
 
-	cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
-		KVM_DEV_ARM_VGIC_CPUID_SHIFT;
+	cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr);
 
-	if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
-		return -EINVAL;
-
-	reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
+	reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
 	reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
 
 	return 0;

From patchwork Wed Sep 27 09:09:05 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 05/11] KVM: arm64: vgic: Use vcpu_idx for the debug information
Date: Wed, 27 Sep 2023 10:09:05 +0100
Message-Id: <20230927090911.3355209-6-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

When dumping the debug information, use vcpu_idx instead of vcpu_id,
as this is independent of any userspace influence.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-debug.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-debug.c b/arch/arm64/kvm/vgic/vgic-debug.c
index 07aa0437125a..85606a531dc3 100644
--- a/arch/arm64/kvm/vgic/vgic-debug.c
+++ b/arch/arm64/kvm/vgic/vgic-debug.c
@@ -166,7 +166,7 @@ static void print_header(struct seq_file *s, struct vgic_irq *irq,
 
 	if (vcpu) {
 		hdr = "VCPU";
-		id = vcpu->vcpu_id;
+		id = vcpu->vcpu_idx;
 	}
 
 	seq_printf(s, "\n");
@@ -212,7 +212,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 		      "     %2d "
 		      "\n",
 			type, irq->intid,
-			(irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1,
+			(irq->target_vcpu) ? irq->target_vcpu->vcpu_idx : -1,
 			pending,
 			irq->line_level,
 			irq->active,
@@ -224,7 +224,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 			irq->mpidr,
 			irq->source,
 			irq->priority,
-			(irq->vcpu) ? irq->vcpu->vcpu_id : -1);
+			(irq->vcpu) ? irq->vcpu->vcpu_idx : -1);
 }
 
 static int vgic_debug_show(struct seq_file *s, void *v)
From patchwork Wed Sep 27 09:09:06 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 06/11] KVM: arm64: Use vcpu_idx for invalidation tracking
Date: Wed, 27 Sep 2023 10:09:06 +0100
Message-Id: <20230927090911.3355209-7-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

While vcpu_id isn't necessarily a bad choice as an identifier for the
currently running vcpu, it is provided by userspace, and there is close
to no guarantee that it would be unique. Switch it to vcpu_idx instead,
for which we have much stronger guarantees.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fa21fb15e927..9379a1227501 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -438,9 +438,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * We might get preempted before the vCPU actually runs, but
 	 * over-invalidation doesn't affect correctness.
 	 */
-	if (*last_ran != vcpu->vcpu_id) {
+	if (*last_ran != vcpu->vcpu_idx) {
 		kvm_call_hyp(__kvm_flush_cpu_context, mmu);
-		*last_ran = vcpu->vcpu_id;
+		*last_ran = vcpu->vcpu_idx;
 	}
 
 	vcpu->cpu = cpu;

From patchwork Wed Sep 27 09:09:07 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 07/11] KVM: arm64: Simplify kvm_vcpu_get_mpidr_aff()
Date: Wed, 27 Sep 2023 10:09:07 +0100
Message-Id: <20230927090911.3355209-8-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

By definition, MPIDR_EL1 cannot be modified by the guest. This means it
is pointless to check whether this is loaded on the CPU.

Simplify the kvm_vcpu_get_mpidr_aff() helper to directly access the
in-memory value.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3d6725ff0bf6..b66ef77cf49e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -465,7 +465,7 @@ static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
 
 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 {
-	return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
+	return __vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
 }
 
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)

From patchwork Wed Sep 27 09:09:08 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 08/11] KVM: arm64: Build MPIDR to vcpu index cache at runtime
Date: Wed, 27 Sep 2023 10:09:08 +0100
Message-Id: <20230927090911.3355209-9-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

The MPIDR_EL1 register contains a unique value that identifies the CPU.
The only problem with it is that it is stupidly large (32 bits, once
the useless stuff is removed).

Trying to obtain a vcpu from an MPIDR value is a fairly common, yet
costly operation: we iterate over all the vcpus until we find the
correct one. While this is cheap for small VMs, it is pretty expensive
on large ones, especially if you are trying to get to the one that's at
the end of the list...

In order to help with this, it is important to realise that the MPIDR
values are actually structured, and that implementations tend to use a
small number of significant bits in the 32bit space.

We can use this fact to our advantage by computing a small hash table
that uses the "compression" of the significant MPIDR bits as an index,
giving us the vcpu index as a result.

Given that the MPIDR values can be supplied by userspace, and that an
evil VMM could decide to make *all* bits significant, resulting in a
4G-entry table, we only use this method if the resulting table fits in
a single page. Otherwise, we fall back to the good old iterative method.

Nothing uses that table just yet, but keep your eyes peeled.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 28 ++++++++++++++++
 arch/arm64/kvm/arm.c              | 54 +++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index af06ccb7ee34..4b5a57a60b6a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -202,6 +202,31 @@ struct kvm_protected_vm {
 	struct kvm_hyp_memcache teardown_mc;
 };
 
+struct kvm_mpidr_data {
+	u64 mpidr_mask;
+	DECLARE_FLEX_ARRAY(u16, cmpidr_to_idx);
+};
+
+static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
+{
+	unsigned long mask = data->mpidr_mask;
+	u64 aff = mpidr & MPIDR_HWID_BITMASK;
+	int nbits, bit, bit_idx = 0;
+	u16 index = 0;
+
+	/*
+	 * If this looks like RISC-V's BEXT or x86's PEXT
+	 * instructions, it isn't by accident.
+	 */
+	nbits = fls(mask);
+	for_each_set_bit(bit, &mask, nbits) {
+		index |= (aff & BIT(bit)) >> (bit - bit_idx);
+		bit_idx++;
+	}
+
+	return index;
+}
+
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
 
@@ -248,6 +273,9 @@ struct kvm_arch {
 	/* VM-wide vCPU feature set */
 	DECLARE_BITMAP(vcpu_features, KVM_VCPU_MAX_FEATURES);
 
+	/* MPIDR to vcpu index mapping, optional */
+	struct kvm_mpidr_data *mpidr_data;
+
 	/*
 	 * VM-wide PMU filter, implemented as a bitmap and big enough for
 	 * up to 2^10 events (ARMv8.0) or 2^16 events (ARMv8.1+).
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9379a1227501..b02e28f76083 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -205,6 +205,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	if (is_protected_kvm_enabled())
 		pkvm_destroy_hyp_vm(kvm);
 
+	kfree(kvm->arch.mpidr_data);
 	kvm_destroy_vcpus(kvm);
 
 	kvm_unshare_hyp(kvm, kvm + 1);
@@ -578,6 +579,57 @@ static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
 	return vcpu_get_flag(vcpu, VCPU_INITIALIZED);
 }
 
+static void kvm_init_mpidr_data(struct kvm *kvm)
+{
+	struct kvm_mpidr_data *data = NULL;
+	unsigned long c, mask, nr_entries;
+	u64 aff_set = 0, aff_clr = ~0UL;
+	struct kvm_vcpu *vcpu;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	if (kvm->arch.mpidr_data || atomic_read(&kvm->online_vcpus) == 1)
+		goto out;
+
+	kvm_for_each_vcpu(c, vcpu, kvm) {
+		u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
+		aff_set |= aff;
+		aff_clr &= aff;
+	}
+
+	/*
+	 * A significant bit can be either 0 or 1, and will only appear in
+	 * aff_set. Use aff_clr to weed out the useless stuff.
+	 */
+	mask = aff_set ^ aff_clr;
+	nr_entries = BIT_ULL(hweight_long(mask));
+
+	/*
+	 * Don't let userspace fool us. If we need more than a single page
+	 * to describe the compressed MPIDR array, just fall back to the
+	 * iterative method. Single vcpu VMs do not need this either.
+	 */
+	if (struct_size(data, cmpidr_to_idx, nr_entries) <= PAGE_SIZE)
+		data = kzalloc(struct_size(data, cmpidr_to_idx, nr_entries),
+			       GFP_KERNEL_ACCOUNT);
+
+	if (!data)
+		goto out;
+
+	data->mpidr_mask = mask;
+
+	kvm_for_each_vcpu(c, vcpu, kvm) {
+		u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
+		u16 index = kvm_mpidr_index(data, aff);
+
+		data->cmpidr_to_idx[index] = c;
+	}
+
+	kvm->arch.mpidr_data = data;
+out:
+	mutex_unlock(&kvm->arch.config_lock);
+}
+
 /*
  * Handle both the initialisation that is being done when the vcpu is
  * run for the first time, as well as the updates that must be
@@ -601,6 +653,8 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 	if (likely(vcpu_has_run_once(vcpu)))
 		return 0;
 
+	kvm_init_mpidr_data(kvm);
+
 	kvm_arm_vcpu_init_debug(vcpu);
 
 	if (likely(irqchip_in_kernel(kvm))) {
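[Editorial sketch] To make the "compression" idea concrete, here is a stand-alone userspace sketch (not kernel code; the vcpu MPIDR values are invented for the example) of the same bit extraction that kvm_mpidr_index() performs: the mask of significant affinity bits is aff_set ^ aff_clr over all vcpus, and each MPIDR is then squeezed down to a small table index by packing together only the masked bits, in the spirit of RISC-V's BEXT / x86's PEXT.

#include <stdint.h>
#include <stdio.h>

/*
 * Pack the bits of @aff selected by @mask together, least significant
 * first - a portable stand-in for PEXT/BEXT, mirroring kvm_mpidr_index().
 */
static uint16_t compress_mpidr(uint64_t mask, uint64_t aff)
{
	uint16_t index = 0;
	int bit_idx = 0;

	for (int bit = 0; bit < 64; bit++) {
		if (!(mask & (1ULL << bit)))
			continue;
		if (aff & (1ULL << bit))
			index |= 1U << bit_idx;
		bit_idx++;
	}

	return index;
}

int main(void)
{
	/* Hypothetical VM: four vcpus spread over Aff1/Aff0 */
	uint64_t mpidr[] = { 0x000, 0x001, 0x100, 0x101 };
	uint64_t aff_set = 0, aff_clr = ~0ULL;
	uint64_t mask;

	for (int i = 0; i < 4; i++) {
		aff_set |= mpidr[i];
		aff_clr &= mpidr[i];
	}

	/* Bits that actually vary across the vcpus: here 0x101 */
	mask = aff_set ^ aff_clr;

	/* The table then only needs 2^popcount(mask) = 4 entries */
	for (int i = 0; i < 4; i++)
		printf("mpidr 0x%03llx -> index %u\n",
		       (unsigned long long)mpidr[i],
		       (unsigned int)compress_mpidr(mask, mpidr[i]));

	return 0;
}

With these values the four vcpus land on indices 0..3, which is why the kernel only allocates BIT(hweight(mask)) entries for the table.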
From patchwork Wed Sep 27 09:09:09 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 09/11] KVM: arm64: Fast-track kvm_mpidr_to_vcpu() when mpidr_data is available
Date: Wed, 27 Sep 2023 10:09:09 +0100
Message-Id: <20230927090911.3355209-10-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

If our fancy little table is present when calling kvm_mpidr_to_vcpu(),
use it to recover the corresponding vcpu.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b02e28f76083..e3d667feecaf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2388,6 +2388,18 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
 	unsigned long i;
 
 	mpidr &= MPIDR_HWID_BITMASK;
+
+	if (kvm->arch.mpidr_data) {
+		u16 idx = kvm_mpidr_index(kvm->arch.mpidr_data, mpidr);
+
+		vcpu = kvm_get_vcpu(kvm,
+				    kvm->arch.mpidr_data->cmpidr_to_idx[idx]);
+		if (mpidr != kvm_vcpu_get_mpidr_aff(vcpu))
+			vcpu = NULL;
+
+		return vcpu;
+	}
+
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
 			return vcpu;
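[Editorial sketch] One subtlety of the fast path above is worth spelling out: the table is only a hint. An MPIDR the guest asks about may never have been mapped to any vcpu, yet still compress onto a slot used by one, so the result has to be checked against the vcpu's real MPIDR. The sketch below condenses the patched kvm_mpidr_to_vcpu() logic with that rationale as comments; it is illustrative, not a drop-in replacement.

static struct kvm_vcpu *mpidr_lookup_sketch(struct kvm *kvm, unsigned long mpidr)
{
	struct kvm_vcpu *vcpu;
	unsigned long i;

	mpidr &= MPIDR_HWID_BITMASK;

	if (kvm->arch.mpidr_data) {
		u16 idx = kvm_mpidr_index(kvm->arch.mpidr_data, mpidr);

		/*
		 * The table maps a compressed MPIDR to a vcpu *index*; an
		 * unmapped MPIDR can alias a used slot, so verify the hit.
		 */
		vcpu = kvm_get_vcpu(kvm, kvm->arch.mpidr_data->cmpidr_to_idx[idx]);
		if (mpidr != kvm_vcpu_get_mpidr_aff(vcpu))
			vcpu = NULL;

		return vcpu;
	}

	/* No table (e.g. it would not fit in a page): linear scan fallback */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
			return vcpu;
	}

	return NULL;
}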
From patchwork Wed Sep 27 09:09:10 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 10/11] KVM: arm64: vgic-v3: Optimize affinity-based SGI injection
Date: Wed, 27 Sep 2023 10:09:10 +0100
Message-Id: <20230927090911.3355209-11-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>

Our affinity-based SGI injection code is a bit daft. We iterate over
all the CPUs trying to match the set of affinities that the guest is
trying to reach, leading to some very bad behaviours if the selected
targets are at a high vcpu index.

Instead, we can now use the fact that we have an optimised MPIDR to
vcpu mapping, and only look at the relevant values. This results in a
much faster injection for large VMs, and in near-constant time,
irrespective of the position in the vcpu index space.

As a bonus, this is mostly deleting a lot of hard-to-read code. Nobody
will complain about that.

Suggested-by: Xu Zhao <zhaoxu.35@bytedance.com>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-mmio-v3.c | 64 +++++-------------------------
 1 file changed, 11 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index 88b8d4524854..89117ba2528a 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -1013,35 +1013,6 @@ int vgic_v3_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
 	return 0;
 }
 
-/*
- * Compare a given affinity (level 1-3 and a level 0 mask, from the SGI
- * generation register ICC_SGI1R_EL1) with a given VCPU.
- * If the VCPU's MPIDR matches, return the level0 affinity, otherwise
- * return -1.
- */
-static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
-{
-	unsigned long affinity;
-	int level0;
-
-	/*
-	 * Split the current VCPU's MPIDR into affinity level 0 and the
-	 * rest as this is what we have to compare against.
-	 */
-	affinity = kvm_vcpu_get_mpidr_aff(vcpu);
-	level0 = MPIDR_AFFINITY_LEVEL(affinity, 0);
-	affinity &= ~MPIDR_LEVEL_MASK;
-
-	/* bail out if the upper three levels don't match */
-	if (sgi_aff != affinity)
-		return -1;
-
-	/* Is this VCPU's bit set in the mask ? */
-	if (!(sgi_cpu_mask & BIT(level0)))
-		return -1;
-
-	return level0;
-}
-
 /*
  * The ICC_SGI* registers encode the affinity differently from the MPIDR,
@@ -1094,9 +1065,11 @@ static void vgic_v3_queue_sgi(struct kvm_vcpu *vcpu, u32 sgi, bool allow_group1)
  * This will trap in sys_regs.c and call this function.
  * This ICC_SGI1R_EL1 register contains the upper three affinity levels of the
  * target processors as well as a bitmask of 16 Aff0 CPUs.
- * If the interrupt routing mode bit is not set, we iterate over all VCPUs to
- * check for matching ones. If this bit is set, we signal all, but not the
- * calling VCPU.
+ *
+ * If the interrupt routing mode bit is not set, we iterate over the Aff0
+ * bits and signal the VCPUs matching the provided Aff{3,2,1}.
+ *
+ * If this bit is set, we signal all, but not the calling VCPU.
  */
 void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 {
@@ -1104,7 +1077,7 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 	struct kvm_vcpu *c_vcpu;
 	unsigned long target_cpus;
 	u64 mpidr;
-	u32 sgi;
+	u32 sgi, aff0;
 	unsigned long c;
 
 	sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg);
@@ -1122,31 +1095,16 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 		return;
 	}
 
+	/* We iterate over affinities to find the corresponding vcpus */
 	mpidr = SGI_AFFINITY_LEVEL(reg, 3);
 	mpidr |= SGI_AFFINITY_LEVEL(reg, 2);
 	mpidr |= SGI_AFFINITY_LEVEL(reg, 1);
 	target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg);
 
-	/*
-	 * We iterate over all VCPUs to find the MPIDRs matching the request.
-	 * If we have handled one CPU, we clear its bit to detect early
-	 * if we are already finished. This avoids iterating through all
-	 * VCPUs when most of the times we just signal a single VCPU.
-	 */
-	kvm_for_each_vcpu(c, c_vcpu, kvm) {
-		int level0;
-
-		/* Exit early if we have dealt with all requested CPUs */
-		if (target_cpus == 0)
-			break;
-		level0 = match_mpidr(mpidr, target_cpus, c_vcpu);
-		if (level0 == -1)
-			continue;
-
-		/* remove this matching VCPU from the mask */
-		target_cpus &= ~BIT(level0);
-
-		vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
+	for_each_set_bit(aff0, &target_cpus, hweight_long(ICC_SGI1R_TARGET_LIST_MASK)) {
+		c_vcpu = kvm_mpidr_to_vcpu(kvm, mpidr | aff0);
+		if (c_vcpu)
+			vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
 	}
 }

From patchwork Wed Sep 27 09:09:11 2023
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Shameerali Kolothum Thodi, Xu Zhao, Eric Auger
Subject: [PATCH v3 11/11] KVM: arm64: Clarify the ordering requirements for vcpu/RD creation
Date: Wed, 27 Sep 2023 10:09:11 +0100
Message-Id: <20230927090911.3355209-12-maz@kernel.org>
In-Reply-To: <20230927090911.3355209-1-maz@kernel.org>
References: <20230927090911.3355209-1-maz@kernel.org>
kvm@vger.kernel.org It goes without saying, but it is probably better to spell it out: If userspace tries to restore a VM, but creates vcpus and/or RDs in a different order, the vcpu/RD mapping will be different. Yes, our API is an ugly piece of crap and I can't believe that we missed this. If we want to relax the above, we'll need to define a new userspace API that allows the mapping to be specified, rather than relying on the kernel to perform the mapping on its own. Reviewed-by: Zenghui Yu Signed-off-by: Marc Zyngier --- Documentation/virt/kvm/devices/arm-vgic-v3.rst | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/Documentation/virt/kvm/devices/arm-vgic-v3.rst b/Documentation/virt/kvm/devices/arm-vgic-v3.rst index 51e5e5762571..5817edb4e046 100644 --- a/Documentation/virt/kvm/devices/arm-vgic-v3.rst +++ b/Documentation/virt/kvm/devices/arm-vgic-v3.rst @@ -59,6 +59,13 @@ Groups: It is invalid to mix calls with KVM_VGIC_V3_ADDR_TYPE_REDIST and KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attributes. + Note that to obtain reproducible results (the same VCPU being associated + with the same redistributor across a save/restore operation), VCPU creation + order, redistributor region creation order as well as the respective + interleaves of VCPU and region creation MUST be preserved. Any change in + either ordering may result in a different vcpu_id/redistributor association, + resulting in a VM that will fail to run at restore time. + Errors: ======= ===============================================================
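
For readers without the tree at hand, the fast path added in patch 09/11 boils down to: compute a compressed index from the MPIDR, pick the candidate vcpu from a small table, then verify the full MPIDR before trusting the result. The standalone C model below illustrates only that shape; struct vcpu, struct mpidr_map, compress_mpidr() and the 16-slot table are illustrative stand-ins, not the kernel's kvm_mpidr_data or kvm_mpidr_index().

/*
 * Standalone model of the "index, then verify" lookup: a small table maps a
 * compressed MPIDR to a vcpu slot, and the candidate is double-checked
 * against the full MPIDR before being returned. All names here are
 * illustrative, not the kernel's.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define MAP_SLOTS	16

struct vcpu {
	uint64_t mpidr;
	int id;
};

struct mpidr_map {
	uint64_t mask;			/* MPIDR bits that vary across vcpus */
	uint16_t idx[MAP_SLOTS];	/* compressed MPIDR -> vcpu index    */
};

/* Gather the MPIDR bits selected by map->mask into a small dense index. */
static unsigned int compress_mpidr(const struct mpidr_map *map, uint64_t mpidr)
{
	uint64_t bits = mpidr & map->mask;
	unsigned int idx = 0, out = 0;

	for (unsigned int b = 0; b < 64; b++) {
		if (!(map->mask & (1ULL << b)))
			continue;
		if (bits & (1ULL << b))
			idx |= 1U << out;
		out++;
	}
	return idx;
}

static struct vcpu *mpidr_to_vcpu(const struct mpidr_map *map,
				  struct vcpu *vcpus, size_t nr_vcpus,
				  uint64_t mpidr)
{
	unsigned int idx = compress_mpidr(map, mpidr);
	struct vcpu *candidate;

	if (idx >= MAP_SLOTS || map->idx[idx] >= nr_vcpus)
		return NULL;

	candidate = &vcpus[map->idx[idx]];
	/* The table is only a hint; the full MPIDR must still match. */
	return candidate->mpidr == mpidr ? candidate : NULL;
}

int main(void)
{
	struct vcpu vcpus[] = {
		{ .mpidr = 0x0000, .id = 0 },
		{ .mpidr = 0x0001, .id = 1 },
		{ .mpidr = 0x0100, .id = 2 },
		{ .mpidr = 0x0101, .id = 3 },
	};
	/* Only Aff0 bit 0 and Aff1 bit 0 vary across these vcpus. */
	struct mpidr_map map = { .mask = 0x0101, .idx = { 0, 1, 2, 3 } };

	struct vcpu *v = mpidr_to_vcpu(&map, vcpus, 4, 0x0100);
	printf("0x0100 -> %d\n", v ? v->id : -1);	/* vcpu 2 */
	v = mpidr_to_vcpu(&map, vcpus, 4, 0x0202);
	printf("0x0202 -> %d\n", v ? v->id : -1);	/* miss   */
	return 0;
}

The verification step is what makes the table safe to use blindly: a stale or sparse entry simply falls through to a NULL result, exactly as the fallback path in the patch does.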
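
The rework in patch 10/11 has a similarly compact core: derive Aff3..Aff1 from the ICC_SGI1R_EL1 value, then visit only the set bits of the 16-bit Aff0 target list and resolve each resulting MPIDR directly. Below is a standalone sketch of that loop; the linear lookup is just a stand-in for the kernel's kvm_mpidr_to_vcpu(), and the field offsets follow the architectural ICC_SGI1R_EL1 layout rather than the kernel's macros.

/*
 * Standalone model of the reworked SGI dispatch: walk only the set bits of
 * the Aff0 target list and resolve each MPIDR directly, instead of scanning
 * every vcpu in the VM.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct vcpu {
	uint64_t mpidr;
	int id;
};

/* Stand-in for kvm_mpidr_to_vcpu(): linear here, O(1) in the kernel. */
static struct vcpu *mpidr_to_vcpu(struct vcpu *vcpus, size_t nr, uint64_t mpidr)
{
	for (size_t i = 0; i < nr; i++)
		if (vcpus[i].mpidr == mpidr)
			return &vcpus[i];
	return NULL;
}

static void dispatch_sgi(struct vcpu *vcpus, size_t nr, uint64_t sgi1r)
{
	/* Aff3..Aff1 become MPIDR bits [39:32], [23:16], [15:8]; Aff0 comes
	 * from the target list below. */
	uint64_t mpidr = (((sgi1r >> 48) & 0xff) << 32) |	/* Aff3 */
			 (((sgi1r >> 32) & 0xff) << 16) |	/* Aff2 */
			 (((sgi1r >> 16) & 0xff) << 8);		/* Aff1 */
	uint16_t targets = sgi1r & 0xffff;
	unsigned int sgi = (sgi1r >> 24) & 0xf;

	for (unsigned int aff0 = 0; aff0 < 16; aff0++) {
		struct vcpu *v;

		if (!(targets & (1U << aff0)))
			continue;
		v = mpidr_to_vcpu(vcpus, nr, mpidr | aff0);
		if (v)
			printf("SGI %u -> vcpu %d\n", sgi, v->id);
	}
}

int main(void)
{
	struct vcpu vcpus[] = {
		{ .mpidr = 0x0000, .id = 0 },
		{ .mpidr = 0x0001, .id = 1 },
		{ .mpidr = 0x0100, .id = 2 },
	};

	/* Aff1 = 1, target list = bit 0: reaches MPIDR 0x0100, i.e. vcpu 2. */
	dispatch_sgi(vcpus, 3, (1ULL << 16) | 0x1);
	return 0;
}

With the lookup being near constant time, the cost of an SGI is now bounded by the number of bits set in the target list (at most 16), regardless of how many vcpus the VM has or where they sit in the index space.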
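
Finally, the ordering requirement documented in patch 11/11 is easiest to see from the userspace side: whatever interleaving of vcpu creation and redistributor-region registration the original VM used must be replayed verbatim at restore time. The sketch below uses the real KVM ioctl and attribute names, but it is only a trimmed illustration: there is no error handling, the 64-bit region encoding is left as a placeholder (see arm-vgic-v3.rst for the bit layout), and it assumes an arm64 host with the kernel uapi headers installed.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Restore-side skeleton: replay the exact vcpu / redistributor-region
 * creation order the original VM used (here: all vcpus, then the vgic,
 * then one region).
 */
static void restore_vcpus_and_redists(int vm_fd)
{
	struct kvm_create_device vgic = { .type = KVM_DEV_TYPE_ARM_VGIC_V3 };
	uint64_t region = 0;	/* index/base/count encoding goes here */
	struct kvm_device_attr attr = {
		.group	= KVM_DEV_ARM_VGIC_GRP_ADDR,
		.attr	= KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION,
		.addr	= (uint64_t)&region,
	};

	/* 1. vcpus, in the same order as in the original VM... */
	ioctl(vm_fd, KVM_CREATE_VCPU, 0);
	ioctl(vm_fd, KVM_CREATE_VCPU, 1);

	/* 2. ...then the vgic device... */
	ioctl(vm_fd, KVM_CREATE_DEVICE, &vgic);

	/* 3. ...then the redistributor region(s), again in the original order. */
	ioctl(vgic.fd, KVM_SET_DEVICE_ATTR, &attr);
}

Any divergence from the original sequence can pair a vcpu with a different redistributor, which is exactly the failure mode the new documentation paragraph warns about.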