From patchwork Fri Jun 6 09:10:23 2014
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 4310801
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, christophe.barnichon@st.com,
	patches@linaro.org
Subject: [PATCH v3] ARM: KVM: Unmap IPA on memslot delete/move
Date: Fri, 6 Jun 2014 11:10:23 +0200
Message-Id: <1402045823-26912-1-git-send-email-eric.auger@linaro.org>

Currently, when a KVM memory region is deleted or moved after a
KVM_SET_USER_MEMORY_REGION ioctl, the corresponding intermediate
physical address (IPA) range is not unmapped. This patch fixes that:
the old region's IPA range is now unmapped in
kvm_arch_commit_memory_region, using unmap_stage2_range.

Signed-off-by: Eric Auger
Acked-by: Marc Zyngier

---

The patch was tested with QEMU using the VFIO platform device. In a
specific IRQ-handling case, the device regularly deletes and recreates
some RAM regions.

v3:
- replace u64 with phys_addr_t in kvm_arch_commit_memory_region

v2:
- KVM_MR_MOVE case also handled and tested using a QEMU hack
- memslot and memory_region stubs moved from arm.c to mmu.c, following
  Marc Zyngier's recommendation
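For context, the ioctl-level sequence that exercises the KVM_MR_DELETE
path looks roughly like the sketch below. Illustrative only, not part of
the patch: vm_fd is assumed to be a VM file descriptor obtained from
KVM_CREATE_VM, and the slot/gpa values are hypothetical. KVM deletes an
existing slot when KVM_SET_USER_MEMORY_REGION is reissued for it with
memory_size == 0.

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int delete_memslot(int vm_fd, __u32 slot, __u64 gpa)
{
	struct kvm_userspace_memory_region region = {
		.slot            = slot,
		.flags           = 0,
		.guest_phys_addr = gpa,
		.memory_size     = 0,	/* size 0 => KVM_MR_DELETE */
		.userspace_addr  = 0,
	};

	/* With this patch applied, KVM unmaps the slot's IPA range here. */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

Reissuing the ioctl for an existing slot with a nonzero size and a
different guest_phys_addr is the KVM_MR_MOVE case that v2 of this patch
also covers.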
---
 arch/arm/kvm/arm.c | 37 -------------------------------------
 arch/arm/kvm/mmu.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+), 37 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f0e50a0..bcc2929 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -155,16 +155,6 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			   struct kvm_memory_slot *dont)
-{
-}
-
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
 
 /**
  * kvm_arch_destroy_vm - destroy the VM data structure
@@ -224,33 +214,6 @@ long kvm_arch_dev_ioctl(struct file *filp,
 	return -EINVAL;
 }
 
-void kvm_arch_memslots_updated(struct kvm *kvm)
-{
-}
-
-int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				   struct kvm_memory_slot *memslot,
-				   struct kvm_userspace_memory_region *mem,
-				   enum kvm_mr_change change)
-{
-	return 0;
-}
-
-void kvm_arch_commit_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem,
-				   const struct kvm_memory_slot *old,
-				   enum kvm_mr_change change)
-{
-}
-
-void kvm_arch_flush_shadow_all(struct kvm *kvm)
-{
-}
-
-void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
-				   struct kvm_memory_slot *slot)
-{
-}
 
 struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 {
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 16f8049..cc5ca9e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1100,3 +1100,49 @@ out:
 	free_hyp_pgds();
 	return err;
 }
+
+void kvm_arch_commit_memory_region(struct kvm *kvm,
+				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   enum kvm_mr_change change)
+{
+	gpa_t gpa = old->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = old->npages << PAGE_SHIFT;
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		spin_lock(&kvm->mmu_lock);
+		unmap_stage2_range(kvm, gpa, size);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}
+
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+				   struct kvm_memory_slot *memslot,
+				   struct kvm_userspace_memory_region *mem,
+				   enum kvm_mr_change change)
+{
+	return 0;
+}
+
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+			   struct kvm_memory_slot *dont)
+{
+}
+
+int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			    unsigned long npages)
+{
+	return 0;
+}
+
+void kvm_arch_memslots_updated(struct kvm *kvm)
+{
+}
+
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+}
+
+void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+				   struct kvm_memory_slot *slot)
+{
+}
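
As a worked illustration of the range computed in
kvm_arch_commit_memory_region above (numbers are hypothetical, not from
the patch): with PAGE_SHIFT = 12, deleting a memslot with
base_gfn = 0x80000 and npages = 0x10000 yields

	gpa  = 0x80000 << 12 = 0x80000000
	size = 0x10000 << 12 = 0x10000000	(256 MB)

so unmap_stage2_range(kvm, 0x80000000, 0x10000000) tears down the
stage-2 mappings for exactly the IPA span the old slot covered.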