From patchwork Thu May 19 18:08:10 2016
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9128287
From: Andre Przywara
To: Marc Zyngier, Christoffer Dall
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Eric Auger
Subject: [PATCH v5 31/57] KVM: arm/arm64: vgic-new: Add TARGET registers handlers
Date: Thu, 19 May 2016 19:08:10 +0100
Message-Id: <1463681316-23039-32-git-send-email-andre.przywara@arm.com>
X-Mailer: git-send-email 2.8.2
In-Reply-To: <1463681316-23039-1-git-send-email-andre.przywara@arm.com>
References: <1463681316-23039-1-git-send-email-andre.przywara@arm.com>

The target register handlers are v2 emulation specific, so their
implementation lives entirely in vgic-mmio-v2.c.
We copy the old VGIC behaviour of assigning an IRQ to the first VCPU
set in the target mask instead of making it possibly pending on
multiple VCPUs.

Signed-off-by: Andre Przywara
Reviewed-by: Christoffer Dall
---
Changelog RFC..v1:
- remove runtime VCPU determination from this v2-only register
- fold in implementation of vgic_v2_irq_change_affinity()
- replace ffs() with __ffs()

Changelog v1 .. v2:
- adapt to new MMIO framework

Changelog v3 .. v4:
- specify accessor width
- use IRQ number accessor macro
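
[Not part of the patch: a minimal, self-contained sketch of the target
selection described in the commit message, i.e. reducing an 8-bit
GICD_ITARGETSR byte to the first VCPU whose bit is set (the handler in
the diff below uses __ffs() for this). The helper name first_target()
is invented here purely for illustration.]

#include <stdio.h>

/* Pick the lowest CPU bit set in the 8-bit target mask; an empty mask
 * falls back to VCPU 0, matching "targets ? __ffs(targets) : 0" in
 * vgic_mmio_write_target(). */
static int first_target(unsigned char targets)
{
	int target = 0;

	while (targets && !(targets & 1)) {
		targets >>= 1;
		target++;
	}
	return target;
}

int main(void)
{
	/* 0x06 = CPUs 1 and 2 requested -> the IRQ is routed to VCPU 1 only */
	printf("target vcpu = %d\n", first_target(0x06));
	return 0;
}

[This mirrors the old VGIC's choice of never making one interrupt
pending on more than one VCPU.]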
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 43 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index bb7389e..52389ff 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -64,6 +64,47 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu,
 	}
 }
 
+static unsigned long vgic_mmio_read_target(struct kvm_vcpu *vcpu,
+					   gpa_t addr, unsigned int len)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 8);
+	int i;
+	u64 val = 0;
+
+	for (i = 0; i < len; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+
+		val |= (u64)irq->targets << (i * 8);
+	}
+
+	return val;
+}
+
+static void vgic_mmio_write_target(struct kvm_vcpu *vcpu,
+				   gpa_t addr, unsigned int len,
+				   unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 8);
+	int i;
+
+	/* GICD_ITARGETSR[0-7] are read-only */
+	if (intid < VGIC_NR_PRIVATE_IRQS)
+		return;
+
+	for (i = 0; i < len; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, NULL, intid + i);
+		int target;
+
+		spin_lock(&irq->irq_lock);
+
+		irq->targets = (val >> (i * 8)) & 0xff;
+		target = irq->targets ? __ffs(irq->targets) : 0;
+		irq->target_vcpu = kvm_get_vcpu(vcpu->kvm, target);
+
+		spin_unlock(&irq->irq_lock);
+	}
+}
+
 static const struct vgic_register_region vgic_v2_dist_registers[] = {
 	REGISTER_DESC_WITH_LENGTH(GIC_DIST_CTRL,
 		vgic_mmio_read_v2_misc, vgic_mmio_write_v2_misc, 12,
@@ -93,7 +134,7 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
 		vgic_mmio_read_priority, vgic_mmio_write_priority, 8,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_TARGET,
-		vgic_mmio_read_raz, vgic_mmio_write_wi, 8,
+		vgic_mmio_read_target, vgic_mmio_write_target, 8,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_CONFIG,
 		vgic_mmio_read_config, vgic_mmio_write_config, 2,
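
[Not part of the patch: a standalone sketch of how the handlers above
decode a GICD_ITARGETSR access, assuming the address is relative to the
start of the ITARGETSR range and the "8 bits per interrupt" layout
passed to VGIC_ADDR_TO_INTID(addr, 8). The helper
itargetsr_offset_to_intid() is invented for illustration; it is not a
kernel function.]

#include <stdio.h>

/* One byte of ITARGETSR per interrupt, so the byte offset within the
 * register block is simply the first INTID touched by the access
 * (offset * 8 bits / 8 bits per IRQ). */
static unsigned int itargetsr_offset_to_intid(unsigned long offset)
{
	return offset;
}

int main(void)
{
	unsigned long offset = 0x20;	/* 32-bit access at GICD_ITARGETSR8 */
	unsigned int len = 4;		/* the handlers loop over 'len' bytes */
	unsigned int intid = itargetsr_offset_to_intid(offset);
	unsigned int i;

	/* A 4-byte access at offset 0x20 covers INTIDs 32..35 */
	for (i = 0; i < len; i++)
		printf("byte %u -> INTID %u\n", i, intid + i);
	return 0;
}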