From patchwork Mon Mar 22 06:01:57 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12153789
From: Shenming Lu
To: Marc Zyngier, Eric Auger, Will Deacon
CC: Alex Williamson, Cornelia Huck, Lorenzo Pieralisi
Subject: [PATCH v5 5/6] KVM: arm64: GICv4.1: Restore VLPI pending state to
 physical side
Date: Mon, 22 Mar 2021 14:01:57 +0800
Message-ID: <20210322060158.1584-6-lushenming@huawei.com>
X-Mailer: git-send-email 2.27.0.windows.1
In-Reply-To: <20210322060158.1584-1-lushenming@huawei.com>
References: <20210322060158.1584-1-lushenming@huawei.com>
From: Zenghui Yu

When setting the forwarding path of a VLPI (i.e. switching it to HW mode),
we can also transfer the pending state from irq->pending_latch to the VPT.
This matters especially for migration, where the pending states of VLPIs
are first restored into kvm's vgic and must then be pushed to the physical
side. We currently send "INT+VSYNC" to make a VLPI pending.

Signed-off-by: Zenghui Yu
Signed-off-by: Shenming Lu
---
 arch/arm64/kvm/vgic/vgic-v4.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index ac029ba3d337..c1845d8f5f7e 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -404,6 +404,7 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
 	struct vgic_its *its;
 	struct vgic_irq *irq;
 	struct its_vlpi_map map;
+	unsigned long flags;
 	int ret;
 
 	if (!vgic_supports_direct_msis(kvm))
@@ -449,6 +450,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
 	irq->host_irq	= virq;
 	atomic_inc(&map.vpe->vlpi_count);
 
+	/* Transfer pending state */
+	raw_spin_lock_irqsave(&irq->irq_lock, flags);
+	if (irq->pending_latch) {
+		ret = irq_set_irqchip_state(irq->host_irq,
+					    IRQCHIP_STATE_PENDING,
+					    irq->pending_latch);
+		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
+
+		/*
+		 * Clear pending_latch and communicate this state
+		 * change via vgic_queue_irq_unlock.
+		 */
+		irq->pending_latch = false;
+		vgic_queue_irq_unlock(kvm, irq, flags);
+	} else {
+		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+	}
+
 out:
 	mutex_unlock(&its->its_lock);
 	return ret;
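
For reference, the new hunk relies on irq_set_irqchip_state() performing the
actual injection on the physical side (for a forwarded VLPI this ends up as
the "INT+VSYNC" pair mentioned in the commit message). The snippet below is a
hypothetical debug helper, not part of this patch: it uses the complementary
irq_get_irqchip_state() API to read the physical pending state back after the
transfer. The function name and the warning messages are made up for
illustration only.

/*
 * Hypothetical debug helper (not in this patch): after pending_latch
 * has been transferred to the VPT, read the pending state back from
 * the irqchip to check that the injection actually took effect.
 */
static void vgic_v4_check_vlpi_pending(struct vgic_irq *irq)
{
	bool pending = false;
	int err;

	err = irq_get_irqchip_state(irq->host_irq, IRQCHIP_STATE_PENDING,
				    &pending);
	if (err)
		pr_warn("IRQ %d: failed to read pending state (%d)\n",
			irq->host_irq, err);
	else if (!pending)
		pr_warn("IRQ %d: VLPI not (yet) pending on the physical side\n",
			irq->host_irq);
}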