From patchwork Tue Jun 11 04:51:18 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 2699921
From: Christoffer Dall <christoffer.dall@linaro.org>
To: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: linaro-kernel@lists.linaro.org, Christoffer Dall <christoffer.dall@linaro.org>, patches@linaro.org
Subject: [PATCH 6/7] KVM: arm-vgic: Add GICD_SPENDSGIR and GICD_CPENDSGIR handlers
Date: Mon, 10 Jun 2013 21:51:18 -0700
Message-Id: <1370926279-32532-7-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1370926279-32532-1-git-send-email-christoffer.dall@linaro.org>
References: <1370926279-32532-1-git-send-email-christoffer.dall@linaro.org>

Handle MMIO accesses to the two registers, supporting both the case
where the VM reads/writes either of these registers and the case where
user space reads/writes them to save/restore the VGIC state.

Note that the added complexity compared to simple set/clear enable
registers stems from the bookkeeping of source CPU ids.  It may be
possible to change the underlying data structure to reduce this
complexity, but since this is not in the critical path at all, it is
left as an interesting exercise for the reader.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 virt/kvm/arm/vgic.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 112 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 4821ce9..62d0fec 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -591,18 +591,128 @@ static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static void read_sgi_set_clear(struct kvm_vcpu *vcpu,
+			       struct kvm_exit_mmio *mmio,
+			       phys_addr_t offset)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 lr, reg = 0;
+
+	/* Copy source SGIs from distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		int shift = 8 * (sgi - min_sgi);
+		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
+	}
+
+	/* Copy source SGIs already on LRs */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = vgic_cpu->vgic_lr[i];
+		sgi = lr & GICH_LR_VIRTUALID;
+		cpu = (lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (lr & GICH_LR_STATE)
+				reg |= (1 << cpu) << (8 * (sgi - min_sgi));
+		}
+	}
+
+	memcpy(mmio->data, &reg, sizeof(reg));
+}
+
 static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
 				  struct kvm_exit_mmio *mmio,
 				  phys_addr_t offset)
 {
-	return false;
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 *lr, reg;
+	bool updated = false;
+
+	if (!mmio->is_write) {
+		read_sgi_set_clear(vcpu, mmio, offset);
+		return false;
+	}
+
+	memcpy(&reg, mmio->data, sizeof(reg));
+
+	/* Clear pending SGIs on distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
+			updated = true;
+		dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
+	}
+
+	/* Clear SGIs already on LRs */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		sgi = *lr & GICH_LR_VIRTUALID;
+		cpu = (*lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (reg & ((1 << cpu) << (8 * (sgi - min_sgi)))) {
+				if (*lr & GICH_LR_PENDING_BIT)
+					updated = true;
+				*lr &= ~GICH_LR_PENDING_BIT;
+			}
+		}
+	}
+
+	return updated;
 }
 
 static bool handle_mmio_sgi_set(struct kvm_vcpu *vcpu,
 				struct kvm_exit_mmio *mmio,
 				phys_addr_t offset)
 {
-	return false;
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int i, sgi, cpu;
+	int min_sgi = (offset & ~0x3);
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 *lr, reg;
+	bool updated = false;
+
+	if (!mmio->is_write) {
+		read_sgi_set_clear(vcpu, mmio, offset);
+		return false;
+	}
+
+	memcpy(&reg, mmio->data, sizeof(reg));
+
+	/* Set pending SGIs on distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
+			updated = true;
+		dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
+	}
+
+	/* Set active SGIs already on LRs to pending and active */
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		sgi = *lr & GICH_LR_VIRTUALID;
+		cpu = (*lr & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT;
+
+		if (sgi >= min_sgi && sgi <= max_sgi) {
+			if (reg & ((1 << cpu) << (8 * (sgi - min_sgi)))) {
+				if (!(*lr & GICH_LR_PENDING_BIT))
+					updated = true;
+				*lr |= GICH_LR_PENDING_BIT;
+			}
+		}
+	}
+
+	return updated;
 }
 
 /*
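
For readers following the register layout these handlers implement,
here is a small standalone sketch (not part of the patch; the helper
name sgi_sources_to_reg is made up for illustration).  Each 32-bit
GICD_SPENDSGIR/GICD_CPENDSGIR register covers four SGIs, one byte per
SGI, and bit n within a byte means "this SGI is pending with source
CPU n"; the same source CPU id travels in bits [12:10] of a list
register, per the GICH_LR_* layout in include/linux/irqchip/arm-gic.h:

#include <stdint.h>
#include <stdio.h>

/* List register field layout, as in include/linux/irqchip/arm-gic.h */
#define GICH_LR_VIRTUALID		0x3ffU
#define GICH_LR_PHYSID_CPUID_SHIFT	10
#define GICH_LR_PHYSID_CPUID		(7U << GICH_LR_PHYSID_CPUID_SHIFT)
#define GICH_LR_PENDING_BIT		(1U << 28)

/*
 * Build one GICD_SPENDSGIR/GICD_CPENDSGIR word from a per-SGI source
 * CPU bitmap: byte (sgi - min_sgi) of the word holds the source mask
 * for that SGI.  Hypothetical helper, for illustration only.
 */
static uint32_t sgi_sources_to_reg(const uint8_t sources[16], int min_sgi)
{
	uint32_t reg = 0;
	int sgi;

	for (sgi = min_sgi; sgi <= min_sgi + 3; sgi++)
		reg |= (uint32_t)sources[sgi] << (8 * (sgi - min_sgi));
	return reg;
}

int main(void)
{
	uint8_t sources[16] = { 0 };
	uint32_t lr;

	/* SGI 1 pending from source CPUs 0 and 2 -> bits 0 and 2 of byte 1 */
	sources[1] = (1 << 0) | (1 << 2);
	printf("GICD_SPENDSGIR0 = 0x%08x\n",
	       (unsigned)sgi_sources_to_reg(sources, 0));	/* 0x00000500 */

	/* A pending list register holding SGI 1 from source CPU 2 */
	lr = (1 & GICH_LR_VIRTUALID) | (2U << GICH_LR_PHYSID_CPUID_SHIFT) |
	     GICH_LR_PENDING_BIT;
	printf("sgi=%u src_cpu=%u\n",
	       (unsigned)(lr & GICH_LR_VIRTUALID),
	       (unsigned)((lr & GICH_LR_PHYSID_CPUID) >>
			  GICH_LR_PHYSID_CPUID_SHIFT));
	return 0;
}

In the patch itself, dist->irq_sgi_sources[vcpu_id][sgi] plays the role
of the sources[] array, and vgic_cpu->vgic_lr[i] the role of lr; the
handlers additionally fold in or strip out the pending state of SGIs
that have already been moved onto list registers.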