From patchwork Sat Mar 13 08:38:59 2021
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12136645
From: Shenming Lu
To: Marc Zyngier, Eric Auger, Will Deacon
Cc: Alex Williamson, Cornelia Huck, Lorenzo Pieralisi
Subject: [PATCH v4 5/6] KVM: arm64: GICv4.1: Restore VLPI pending state to physical side
Date: Sat, 13 Mar 2021 16:38:59 +0800
Message-ID: <20210313083900.234-6-lushenming@huawei.com>
In-Reply-To: <20210313083900.234-1-lushenming@huawei.com>
References: <20210313083900.234-1-lushenming@huawei.com>
List-Id: linux-arm-kernel@lists.infradead.org
From: Zenghui Yu

When setting the forwarding path of a VLPI (i.e., switching to HW mode),
we can also transfer the pending state from irq->pending_latch to the
VPT. This matters especially for migration, where the pending states of
VLPIs are first restored into kvm's vgic. We currently send "INT+VSYNC"
to make a VLPI pending on the physical side.

Signed-off-by: Zenghui Yu
Signed-off-by: Shenming Lu
---
 arch/arm64/kvm/vgic/vgic-v4.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index ac029ba3d337..3b82ab80c2f3 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -449,6 +449,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
 	irq->host_irq	= virq;
 	atomic_inc(&map.vpe->vlpi_count);
 
+	/* Transfer pending state */
+	if (irq->pending_latch) {
+		unsigned long flags;
+
+		ret = irq_set_irqchip_state(irq->host_irq,
+					    IRQCHIP_STATE_PENDING,
+					    irq->pending_latch);
+		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
+
+		/*
+		 * Clear pending_latch and communicate this state
+		 * change via vgic_queue_irq_unlock.
+		 */
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		irq->pending_latch = false;
+		vgic_queue_irq_unlock(kvm, irq, flags);
+	}
+
 out:
 	mutex_unlock(&its->its_lock);
 	return ret;
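As a reading aid, not part of the patch: the hunk above follows a common
pattern for handing a software-latched pending bit over to hardware.
Below is a minimal sketch of the same flow in kernel-style C, for
readers who prefer it outside diff markers. transfer_latched_pending()
is a hypothetical helper invented for illustration; the types and calls
(struct vgic_irq, irq_set_irqchip_state(), vgic_queue_irq_unlock()) are
the ones the patch itself uses.

/*
 * Hypothetical helper, for illustration only; mirrors the hunk above.
 * Forward a software-latched pending bit to the hardware side, then
 * clear the latch under irq_lock so the vgic state stays consistent.
 */
static void transfer_latched_pending(struct kvm *kvm, struct vgic_irq *irq)
{
	unsigned long flags;
	int err;

	if (!irq->pending_latch)
		return;

	/* Make the VLPI pending in the VPT (backed by INT+VSYNC). */
	err = irq_set_irqchip_state(irq->host_irq,
				    IRQCHIP_STATE_PENDING, true);
	WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);

	/*
	 * The latch is now stale: clear it while holding irq_lock and
	 * let vgic_queue_irq_unlock() re-evaluate the IRQ and drop the
	 * lock.
	 */
	raw_spin_lock_irqsave(&irq->irq_lock, flags);
	irq->pending_latch = false;
	vgic_queue_irq_unlock(kvm, irq, flags);
}

Note the ordering: the hardware side is made pending before the software
latch is cleared, so the pending state cannot be dropped mid-handoff.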