From patchwork Fri Jun 4 06:48:28 2021
X-Patchwork-Submitter: Shenming Lu
X-Patchwork-Id: 12298827
From: Shenming Lu
To: Marc Zyngier, Will Deacon
Subject: [PATCH] KVM: arm64: vgic: Communicate a change of the IRQ state via vgic_queue_irq_unlock
Date: Fri, 4 Jun 2021 14:48:28 +0800
Message-ID: <20210604064828.1497-1-lushenming@huawei.com>
X-Mailer: git-send-email 2.27.0.windows.1

Hi Marc,

Some time ago, you suggested that we should communicate a change of the
IRQ state via vgic_queue_irq_unlock [1], which needs to include dropping
the IRQ from the VCPU's ap_list when the IRQ is on the ap_list but is no
longer pending or enabled. I have additionally added a case where the IRQ
has to be migrated to another VCPU's ap_list (maybe you have forgotten
this...).

Does this patch match what you had in mind at the time?

[1] https://lore.kernel.org/patchwork/patch/1371884/

Signed-off-by: Shenming Lu
---
 arch/arm64/kvm/vgic/vgic.c | 116 ++++++++++++++++++++++++-------------
 1 file changed, 75 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index 15b666200f0b..9b88d49aa439 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -326,8 +326,9 @@ static bool vgic_validate_injection(struct vgic_irq *irq, bool level, void *owne
 
 /*
  * Check whether an IRQ needs to (and can) be queued to a VCPU's ap list.
- * Do the queuing if necessary, taking the right locks in the right order.
- * Returns true when the IRQ was queued, false otherwise.
+ * Do the queuing, dropping or migrating if necessary, taking the right
+ * locks in the right order. Returns true when the IRQ was queued, false
+ * otherwise.
  *
  * Needs to be entered with the IRQ lock already held, but will return
  * with all locks dropped.
@@ -335,49 +336,38 @@ static bool vgic_validate_injection(struct vgic_irq *irq, bool level, void *owne
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
                            unsigned long flags)
 {
+        struct kvm_vcpu *target_vcpu;
         struct kvm_vcpu *vcpu;
+        bool ret = false;
 
         lockdep_assert_held(&irq->irq_lock);
 
 retry:
-        vcpu = vgic_target_oracle(irq);
-        if (irq->vcpu || !vcpu) {
+        target_vcpu = vgic_target_oracle(irq);
+        vcpu = irq->vcpu;
+        if (target_vcpu == vcpu) {
                 /*
-                 * If this IRQ is already on a VCPU's ap_list, then it
-                 * cannot be moved or modified and there is no more work for
+                 * If this IRQ's state is already consistent with its
+                 * presence on the right ap_list, there is no more work for
                  * us to do.
-                 *
-                 * Otherwise, if the irq is not pending and enabled, it does
-                 * not need to be inserted into an ap_list and there is also
-                 * no more work for us to do.
                  */
                 raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
-
-                /*
-                 * We have to kick the VCPU here, because we could be
-                 * queueing an edge-triggered interrupt for which we
-                 * get no EOI maintenance interrupt. In that case,
-                 * while the IRQ is already on the VCPU's AP list, the
-                 * VCPU could have EOI'ed the original interrupt and
-                 * won't see this one until it exits for some other
-                 * reason.
-                 */
-                if (vcpu) {
-                        kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-                        kvm_vcpu_kick(vcpu);
-                }
-                return false;
+                goto out;
         }
 
         /*
          * We must unlock the irq lock to take the ap_list_lock where
-         * we are going to insert this new pending interrupt.
+         * we are going to insert/drop this IRQ.
          */
         raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 
         /* someone can do stuff here, which we re-check below */
 
-        raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+        if (target_vcpu)
+                raw_spin_lock_irqsave(&target_vcpu->arch.vgic_cpu.ap_list_lock,
+                                      flags);
+        if (vcpu)
+                raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
         raw_spin_lock(&irq->irq_lock);
 
         /*
@@ -392,30 +382,74 @@ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
          * In both cases, drop the locks and retry.
          */
 
-        if (unlikely(irq->vcpu || vcpu != vgic_target_oracle(irq))) {
+        if (unlikely(target_vcpu != vgic_target_oracle(irq) ||
+                     vcpu != irq->vcpu)) {
                 raw_spin_unlock(&irq->irq_lock);
-                raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
-                                           flags);
+                if (target_vcpu)
+                        raw_spin_unlock_irqrestore(&target_vcpu->arch.vgic_cpu.ap_list_lock,
+                                                   flags);
+                if (vcpu)
+                        raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
+                                                   flags);
 
                 raw_spin_lock_irqsave(&irq->irq_lock, flags);
                 goto retry;
         }
 
-        /*
-         * Grab a reference to the irq to reflect the fact that it is
-         * now in the ap_list.
-         */
-        vgic_get_irq_kref(irq);
-        list_add_tail(&irq->ap_list, &vcpu->arch.vgic_cpu.ap_list_head);
-        irq->vcpu = vcpu;
+        if (!vcpu && target_vcpu) {
+                /*
+                 * Insert this new pending interrupt.
+                 * Grab a reference to the irq to reflect the fact that
+                 * it is now in the ap_list.
+                 */
+                vgic_get_irq_kref(irq);
+                list_add_tail(&irq->ap_list,
+                              &target_vcpu->arch.vgic_cpu.ap_list_head);
+                irq->vcpu = target_vcpu;
+                ret = true;
+        } else if (vcpu && !target_vcpu) {
+                /*
+                 * This IRQ is not pending or enabled but on the ap_list,
+                 * drop it from the ap_list.
+                 */
+                list_del(&irq->ap_list);
+                irq->vcpu = NULL;
+                raw_spin_unlock(&irq->irq_lock);
+                vgic_put_irq(vcpu->kvm, irq);
+                raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
+                                           flags);
+                goto out;
+        } else {
+                /* This IRQ looks like it has to be migrated. */
+                list_del(&irq->ap_list);
+                list_add_tail(&irq->ap_list,
+                              &target_vcpu->arch.vgic_cpu.ap_list_head);
+                irq->vcpu = target_vcpu;
+        }
 
         raw_spin_unlock(&irq->irq_lock);
-        raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+        if (target_vcpu)
+                raw_spin_unlock_irqrestore(&target_vcpu->arch.vgic_cpu.ap_list_lock,
+                                           flags);
+        if (vcpu)
+                raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
 
-        kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-        kvm_vcpu_kick(vcpu);
+out:
+        /*
+         * Even when the IRQ is already queued correctly, we have
+         * to kick the VCPU, because we could be queueing an
+         * edge-triggered interrupt for which we get no EOI
+         * maintenance interrupt. In that case, while the IRQ
+         * is already on the VCPU's AP list, the VCPU could
+         * have EOI'ed the original interrupt and won't see
+         * this one until it exits for some other reason.
+         */
+        if (target_vcpu) {
+                kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
+                kvm_vcpu_kick(target_vcpu);
+        }
 
-        return true;
+        return ret;
 }
 
 /**
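
To make the intended behaviour easier to see at a glance, here is a minimal
standalone sketch of the decision the reworked function takes. This is plain
user-space C, not kernel code; decide(), the action enum and the use of -1 to
stand in for a NULL VCPU pointer are invented purely for illustration:

/* Hypothetical model of the reworked vgic_queue_irq_unlock() decision. */
#include <stdio.h>

enum action { ACT_NONE, ACT_QUEUE, ACT_DROP, ACT_MIGRATE };

/* cur: VCPU the IRQ is queued on (irq->vcpu); target: vgic_target_oracle() result. */
static enum action decide(int cur, int target)
{
        if (target == cur)
                return ACT_NONE;    /* state already consistent with the ap_list */
        if (cur < 0)
                return ACT_QUEUE;   /* not queued yet, but pending+enabled with a target */
        if (target < 0)
                return ACT_DROP;    /* on an ap_list, but no longer pending or enabled */
        return ACT_MIGRATE;         /* queued, but on the wrong VCPU's ap_list */
}

int main(void)
{
        printf("%d %d %d %d\n",
               decide(-1, 0),  /* ACT_QUEUE   */
               decide(0, -1),  /* ACT_DROP    */
               decide(0, 1),   /* ACT_MIGRATE */
               decide(1, 1));  /* ACT_NONE    */
        return 0;
}

The patch takes the relevant ap_list_lock(s) around whichever of these actions
applies, and kicks target_vcpu (if any) at the out: label.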