From patchwork Wed Jan 27 12:13:35 2021
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12049909
From: Shenming Lu <lushenming@huawei.com>
To: Marc Zyngier, Eric Auger, Will Deacon,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 2/4] KVM: arm64: GICv4.1: Try to save hw pending state in save_pending_tables
Date: Wed, 27 Jan 2021 20:13:35 +0800
Message-ID: <20210127121337.1092-3-lushenming@huawei.com>
In-Reply-To: <20210127121337.1092-1-lushenming@huawei.com>
References: <20210127121337.1092-1-lushenming@huawei.com>
Cc: Lorenzo Pieralisi, Cornelia Huck, lushenming@huawei.com,
 Alex Williamson, yuzenghui@huawei.com, wanghaibin.wang@huawei.com

After pausing all vCPUs and devices capable of interrupting, in order to
save the states of all interrupts, besides flushing the pending states in
kvm's vgic, we also try to flush the states of VLPIs in the virtual
pending tables into guest RAM. This needs GICv4.1, and the vPEs have to
be safely unmapped first so that their virtual pending tables are up to
date in memory.

Saving the VSGIs needs the vPEs to be mapped, so it might seem to
conflict with the saving of VLPIs. However, the vPEs are mapped back at
the end of save_pending_tables, and both save paths require kvm->lock to
be held, so they can only run serially and everything works fine.

Signed-off-by: Shenming Lu <lushenming@huawei.com>
---
 arch/arm64/kvm/vgic/vgic-v3.c | 61 +++++++++++++++++++++++++++++++----
 1 file changed, 55 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 52915b342351..06b1162b7a0a 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
 #include <linux/irqchip/arm-gic-v3.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <kvm/arm_vgic.h>
@@ -356,6 +358,32 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
 	return 0;
 }
 
+/*
+ * The deactivation of the doorbell interrupt will trigger the
+ * unmapping of the associated vPE.
+ */
+static void unmap_all_vpes(struct vgic_dist *dist)
+{
+	struct irq_desc *desc;
+	int i;
+
+	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
+	}
+}
+
+static void map_all_vpes(struct vgic_dist *dist)
+{
+	struct irq_desc *desc;
+	int i;
+
+	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+		irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
+	}
+}
+
 /**
  * vgic_v3_save_pending_tables - Save the pending tables into guest RAM
  * kvm lock and all vcpu lock must be held
@@ -365,14 +393,26 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct vgic_irq *irq;
 	gpa_t last_ptr = ~(gpa_t)0;
-	int ret;
+	bool vlpi_avail = false;
+	int ret = 0;
 	u8 val;
 
+	/*
+	 * As a preparation for getting any VLPI states.
+	 * The vgic initialized check ensures that the allocation and
+	 * enabling of the doorbells have already been done.
+	 */
+	if (kvm_vgic_global_state.has_gicv4_1 && !WARN_ON(!vgic_initialized(kvm))) {
+		unmap_all_vpes(dist);
+		vlpi_avail = true;
+	}
+
 	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
 		int byte_offset, bit_nr;
 		struct kvm_vcpu *vcpu;
 		gpa_t pendbase, ptr;
 		bool stored;
+		bool is_pending = irq->pending_latch;
 
 		vcpu = irq->target_vcpu;
 		if (!vcpu)
@@ -387,24 +427,33 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
 		if (ptr != last_ptr) {
 			ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
 			if (ret)
-				return ret;
+				goto out;
 			last_ptr = ptr;
 		}
 
 		stored = val & (1U << bit_nr);
-		if (stored == irq->pending_latch)
+
+		if (irq->hw && vlpi_avail)
+			vgic_v4_get_vlpi_state(irq, &is_pending);
+
+		if (stored == is_pending)
 			continue;
 
-		if (irq->pending_latch)
+		if (is_pending)
 			val |= 1 << bit_nr;
 		else
 			val &= ~(1 << bit_nr);
 
 		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
 		if (ret)
-			return ret;
+			goto out;
 	}
-	return 0;
+
+out:
+	if (vlpi_avail)
+		map_all_vpes(dist);
+
+	return ret;
 }
 
 /**
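
[Editor's note, not part of the patch] The loop above calls
vgic_v4_get_vlpi_state(), which is introduced earlier in this series
(patch 1/4) and is not shown in this diff. Conceptually, once
unmap_all_vpes() has run, each vPE's virtual pending table is coherent in
memory, so reading a VLPI's hardware pending state is just a bit test in
that table. The sketch below only illustrates the idea; the "_sketch"
name is mine and the exact code in the series may differ:

/*
 * Illustrative sketch only (not the code from patch 1/4). Assumes the
 * usual KVM vgic headers (kvm/arm_vgic.h, linux/irqchip/arm-gic-v4.h).
 * After unmap_all_vpes(), the vPE's virtual pending table (vpt_page)
 * holds the up-to-date pending state, one bit per LPI, so the VLPI's
 * state is bit (intid % 8) of byte (intid / 8).
 */
static int vgic_v4_get_vlpi_state_sketch(struct vgic_irq *irq, bool *val)
{
	struct its_vpe *vpe = &irq->target_vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
	u8 *vpt = page_address(vpe->vpt_page);

	*val = !!(vpt[irq->intid / 8] & BIT(irq->intid % 8));
	return 0;
}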
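
[Editor's note, not part of the patch] For context, the whole path above
is driven from userspace during migration, after the vCPUs have been
paused, by setting the KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES attribute in
the KVM_DEV_ARM_VGIC_GRP_CTRL group on the GICv3 device fd. A minimal
sketch of that call follows; vgic_fd is a placeholder for the fd returned
by KVM_CREATE_DEVICE, and error handling is omitted:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Ask KVM to flush all pending interrupt state (including VLPIs, with
 * this patch applied) into the guest's pending tables. The vCPUs must
 * already be stopped; KVM takes kvm->lock and all vcpu locks itself.
 */
static int save_pending_tables(int vgic_fd)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
		.attr  = KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES,
	};

	return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
}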