From patchwork Sat Nov 10 15:44:51 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1724361
Subject: [PATCH v4 05/13] ARM: KVM: VGIC accept vcpu and dist base addresses from user space
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
From: Christoffer Dall
Date: Sat, 10 Nov 2012 16:44:51 +0100
Message-ID: <20121110154451.3061.74235.stgit@chazy-air>
In-Reply-To: <20121110154358.3061.16338.stgit@chazy-air>
References: <20121110154358.3061.16338.stgit@chazy-air>
User-Agent: StGit/0.15

User space defines the model to emulate for a guest and should therefore
decide which addresses are used for both parts of the VGIC: the virtual
CPU interface, which is directly mapped in the guest physical address
space, and the emulated distributor interface, which is mapped in
software by the in-kernel VGIC support.

Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_mmu.h  |    2 +
 arch/arm/include/asm/kvm_vgic.h |    9 ++++++
 arch/arm/kvm/arm.c              |   16 ++++++++++
 arch/arm/kvm/vgic.c             |   61 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 9bd0508..0800531 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -26,6 +26,8 @@
  * To save a bit of memory and to avoid alignment issues we assume 39-bit IPA
  * for now, but remember that the level-1 table must be aligned to its size.
  */
+#define KVM_PHYS_SHIFT		(38)
+#define KVM_PHYS_MASK		((1ULL << KVM_PHYS_SHIFT) - 1)
 #define PTRS_PER_PGD2	512
 #define PGD2_ORDER	get_order(PTRS_PER_PGD2 * sizeof(pgd_t))
 
diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index b444ecf..9ca8d21 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -20,6 +20,9 @@
 #define __ASM_ARM_KVM_VGIC_H
 
 struct vgic_dist {
+	/* Distributor and vcpu interface mapping in the guest */
+	phys_addr_t	vgic_dist_base;
+	phys_addr_t	vgic_cpu_base;
 };
 
 struct vgic_cpu {
@@ -31,6 +34,7 @@ struct kvm_run;
 struct kvm_exit_mmio;
 
 #ifdef CONFIG_KVM_ARM_VGIC
+int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr);
 bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
 
@@ -40,6 +44,11 @@ static inline int kvm_vgic_hyp_init(void)
 	return 0;
 }
 
+static inline int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
+{
+	return 0;
+}
+
 static inline int kvm_vgic_init(struct kvm *kvm)
 {
 	return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 426828a..3ac1aab 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -61,6 +61,8 @@ static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u8 kvm_next_vmid;
 static DEFINE_SPINLOCK(kvm_vmid_lock);
 
+static bool vgic_present;
+
 static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
 {
 	BUG_ON(preemptible());
@@ -825,7 +827,19 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 static int kvm_vm_ioctl_set_device_address(struct kvm *kvm,
 					   struct kvm_device_address *dev_addr)
 {
-	return -ENODEV;
+	unsigned long dev_id, type;
+
+	dev_id = (dev_addr->id & KVM_DEVICE_ID_MASK) >> KVM_DEVICE_ID_SHIFT;
+	type = (dev_addr->id & KVM_DEVICE_TYPE_MASK) >> KVM_DEVICE_TYPE_SHIFT;
+
+	switch (dev_id) {
+	case KVM_ARM_DEVICE_VGIC_V2:
+		if (!vgic_present)
+			return -ENXIO;
+		return kvm_vgic_set_addr(kvm, type, dev_addr->addr);
+	default:
+		return -ENODEV;
+	}
 }
 
 long kvm_arch_vm_ioctl(struct file *filp,
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index 26ada3b..f85b275 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -22,6 +22,13 @@
 #include
 #include
 
+#define VGIC_ADDR_UNDEF		(-1)
+#define IS_VGIC_ADDR_UNDEF(_x)	((_x) == (typeof(_x))VGIC_ADDR_UNDEF)
+
+#define VGIC_DIST_SIZE		0x1000
+#define VGIC_CPU_SIZE		0x2000
+
+
 #define ACCESS_READ_VALUE	(1 << 0)
 #define ACCESS_READ_RAZ		(0 << 0)
 #define ACCESS_READ_MASK(x)	((x) & (1 << 0))
@@ -136,3 +143,57 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_exi
 {
 	return KVM_EXIT_MMIO;
 }
+
+static bool vgic_ioaddr_overlap(struct kvm *kvm)
+{
+	phys_addr_t dist = kvm->arch.vgic.vgic_dist_base;
+	phys_addr_t cpu = kvm->arch.vgic.vgic_cpu_base;
+
+	if (IS_VGIC_ADDR_UNDEF(dist) || IS_VGIC_ADDR_UNDEF(cpu))
+		return false;
+	if ((dist <= cpu && dist + VGIC_DIST_SIZE > cpu) ||
+	    (cpu <= dist && cpu + VGIC_CPU_SIZE > dist))
+		return true;
+	return false;
+}
+
+int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
+{
+	int r = 0;
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+
+	if (addr & ~KVM_PHYS_MASK)
+		return -E2BIG;
+
+	if (addr & ~PAGE_MASK)
+		return -EINVAL;
+
+	mutex_lock(&kvm->lock);
+	switch (type) {
+	case KVM_VGIC_V2_ADDR_TYPE_DIST:
+		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_dist_base))
+			return -EEXIST;
+		if (addr + VGIC_DIST_SIZE < addr)
+			return -EINVAL;
+		kvm->arch.vgic.vgic_dist_base = addr;
+		break;
+	case KVM_VGIC_V2_ADDR_TYPE_CPU:
+		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_cpu_base))
+			return -EEXIST;
+		if (addr + VGIC_CPU_SIZE < addr)
+			return -EINVAL;
+		kvm->arch.vgic.vgic_cpu_base = addr;
+		break;
+	default:
+		r = -ENODEV;
+	}
+
+	if (vgic_ioaddr_overlap(kvm)) {
+		kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
+		kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
+		return -EINVAL;
+	}
+
+	mutex_unlock(&kvm->lock);
+	return r;
+}
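
Note (not part of the patch): for context, this is roughly how a userspace
VMM would drive the new handler. A minimal sketch only -- the uapi half of
the interface is not in this patch, so the ioctl name (spelled
KVM_SET_DEVICE_ADDRESS here), the layout of struct kvm_device_address and
the values behind KVM_DEVICE_ID_SHIFT/KVM_DEVICE_TYPE_SHIFT are assumptions
taken from the way arm.c decodes them above; the guest physical addresses
are arbitrary page-aligned examples.

/*
 * Hypothetical userspace sketch: place the VGIC distributor and CPU
 * interface in guest physical address space.  Assumes a <linux/kvm.h>
 * carrying the uapi side of this series (ioctl number, struct and
 * encoding macros); none of that is defined in this patch.
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_vgic_addr(int vm_fd, uint64_t addr_type, uint64_t gpa)
{
	struct kvm_device_address dev_addr = {
		/* Pack device id and address type the same way
		 * kvm_vm_ioctl_set_device_address() unpacks them. */
		.id   = ((uint64_t)KVM_ARM_DEVICE_VGIC_V2 << KVM_DEVICE_ID_SHIFT) |
			(addr_type << KVM_DEVICE_TYPE_SHIFT),
		.addr = gpa,
	};

	return ioctl(vm_fd, KVM_SET_DEVICE_ADDRESS, &dev_addr);
}

int main(void)
{
	int vm_fd = -1;	/* obtained earlier via ioctl(kvm_fd, KVM_CREATE_VM, 0) */

	/* Example addresses: 4K distributor at 0x2c001000, 8K cpu interface
	 * at 0x2c002000 -- page aligned and non-overlapping. */
	if (set_vgic_addr(vm_fd, KVM_VGIC_V2_ADDR_TYPE_DIST, 0x2c001000ULL) < 0)
		perror("set distributor base");
	if (set_vgic_addr(vm_fd, KVM_VGIC_V2_ADDR_TYPE_CPU, 0x2c002000ULL) < 0)
		perror("set cpu interface base");
	return 0;
}

With the checks added in vgic.c, the kernel rejects addresses above
KVM_PHYS_MASK (-E2BIG), addresses that are not page aligned (-EINVAL),
re-setting a base that was already set (-EEXIST), and dist/cpu ranges that
wrap around or overlap each other (-EINVAL).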
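
A second note, also not part of the patch: kvm_vgic_set_addr() takes
kvm->lock before the switch, but the -EEXIST and -EINVAL early returns
inside the switch, and the overlap path after it, return without dropping
the mutex. Below is a sketch of the same checks with a single unlock point;
it reuses the names introduced by this patch and is only an illustration,
not the submitted code.

/*
 * Illustration only: the same validation as kvm_vgic_set_addr() above,
 * restructured so that every path releases kvm->lock before returning.
 */
static int vgic_set_addr_sketch(struct kvm *kvm, unsigned long type, u64 addr)
{
	struct vgic_dist *vgic = &kvm->arch.vgic;
	int r = 0;

	if (addr & ~KVM_PHYS_MASK)
		return -E2BIG;
	if (addr & ~PAGE_MASK)
		return -EINVAL;

	mutex_lock(&kvm->lock);
	switch (type) {
	case KVM_VGIC_V2_ADDR_TYPE_DIST:
		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_dist_base))
			r = -EEXIST;
		else if (addr + VGIC_DIST_SIZE < addr)	/* wrap-around */
			r = -EINVAL;
		else
			vgic->vgic_dist_base = addr;
		break;
	case KVM_VGIC_V2_ADDR_TYPE_CPU:
		if (!IS_VGIC_ADDR_UNDEF(vgic->vgic_cpu_base))
			r = -EEXIST;
		else if (addr + VGIC_CPU_SIZE < addr)	/* wrap-around */
			r = -EINVAL;
		else
			vgic->vgic_cpu_base = addr;
		break;
	default:
		r = -ENODEV;
	}

	if (!r && vgic_ioaddr_overlap(kvm)) {
		vgic->vgic_dist_base = VGIC_ADDR_UNDEF;
		vgic->vgic_cpu_base = VGIC_ADDR_UNDEF;
		r = -EINVAL;
	}
	mutex_unlock(&kvm->lock);	/* single unlock point for all paths */

	return r;
}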