From patchwork Thu Apr  3 15:17:46 2014
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 3932981
From: Eric Auger
To: eric.auger@st.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org
Cc: Eric Auger, linux-kernel@vger.kernel.org, christophe.barnichon@st.com,
    patches@linaro.org
Subject: [PATCH] ARM: KVM: Handle IPA unmapping on memory region deletion
Date: Thu, 3 Apr 2014 17:17:46 +0200
Message-Id: <1396538266-13245-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

Currently, when a KVM memory region is removed using
kvm_vm_ioctl_set_memory_region (with a memory region size equal to 0),
the corresponding intermediate physical address (IPA) range is not
unmapped. This patch unmaps the region's IPA range in
kvm_arch_commit_memory_region, using unmap_stage2_range.

The patch was tested on a QEMU VFIO based use case where RAM memory
region creation/deletion happens frequently for IRQ handling.

Notes:
- the KVM_MR_MOVE case likely requires a similar addition, but I
  cannot test it currently

Signed-off-by: Eric Auger
---
 arch/arm/include/asm/kvm_mmu.h | 2 ++
 arch/arm/kvm/arm.c             | 8 ++++++++
 arch/arm/kvm/mmu.c             | 2 +-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 2d122ad..a91c863 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -52,6 +52,8 @@ void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			  phys_addr_t pa, unsigned long size);
 
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t guest_ipa, u64 size);
+
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..9a4bc10 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -241,6 +241,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
+	if (change == KVM_MR_DELETE) {
+		gpa_t gpa = old->base_gfn << PAGE_SHIFT;
+		u64 size = old->npages << PAGE_SHIFT;
+
+		spin_lock(&kvm->mmu_lock);
+		unmap_stage2_range(kvm, gpa, size);
+		spin_unlock(&kvm->mmu_lock);
+	}
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7789857..e8580e2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -443,7 +443,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
  * destroying the VM), otherwise another faulting VCPU may come in and mess
  * with things behind our backs.
  */
-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 {
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }