From patchwork Thu Apr 3 15:17:46 2014
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 3932971
From: Eric Auger
To: eric.auger@st.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, patches@linaro.org,
	christophe.barnichon@st.com, Eric Auger
Subject: [PATCH] ARM: KVM: Handle IPA unmapping on memory region deletion
Date: Thu, 3 Apr 2014 17:17:46 +0200
Message-Id: <1396538266-13245-1-git-send-email-eric.auger@linaro.org>

Currently, when a KVM memory region is removed using
kvm_vm_ioctl_set_memory_region (with a memory region size of 0), the
corresponding intermediate physical address (IPA) range is not
unmapped. This patch unmaps the region's IPA range in
kvm_arch_commit_memory_region, using unmap_stage2_range.

The patch was tested with a QEMU VFIO-based use case in which RAM
memory regions are frequently created and deleted for IRQ handling.
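For reference, the deletion path exercised here is driven from
userspace through the KVM_SET_USER_MEMORY_REGION ioctl with
memory_size set to 0. A minimal sketch (delete_memslot and the
vm_fd/slot parameters are illustrative, not part of this patch):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Delete an existing memslot: KVM treats a memory_size of 0 as
	 * a request to remove the slot (KVM_MR_DELETE), which is what
	 * eventually reaches kvm_arch_commit_memory_region() below. */
	static int delete_memslot(int vm_fd, __u32 slot)
	{
		struct kvm_userspace_memory_region region = {
			.slot = slot,
			.memory_size = 0,	/* 0 requests deletion */
		};

		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
	}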
Notes:
- the KVM_MR_MOVE case likely requires a similar addition, but I
  cannot test it at the moment

Signed-off-by: Eric Auger
---
 arch/arm/include/asm/kvm_mmu.h | 2 ++
 arch/arm/kvm/arm.c             | 8 ++++++++
 arch/arm/kvm/mmu.c             | 2 +-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 2d122ad..a91c863 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -52,6 +52,8 @@ void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			  phys_addr_t pa, unsigned long size);
 
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t guest_ipa, u64 size);
+
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..9a4bc10 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -241,6 +241,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
+	if (change == KVM_MR_DELETE) {
+		gpa_t gpa = old->base_gfn << PAGE_SHIFT;
+		u64 size = old->npages << PAGE_SHIFT;
+
+		spin_lock(&kvm->mmu_lock);
+		unmap_stage2_range(kvm, gpa, size);
+		spin_unlock(&kvm->mmu_lock);
+	}
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7789857..e8580e2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -443,7 +443,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
  * destroying the VM), otherwise another faulting VCPU may come in and mess
  * with things behind our backs.
  */
-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 {
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
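For illustration only (not part of the patch): the KVM_MR_DELETE hunk
converts the old slot's page-frame bounds into a byte-addressed IPA
range. Assuming 4 KiB pages (PAGE_SHIFT == 12) and a hypothetical slot
with base_gfn == 0x10000 and npages == 0x100:

	gpa  = 0x10000 << 12 = 0x10000000  /* first IPA of the old slot */
	size = 0x100   << 12 = 0x00100000  /* 256 pages = 1 MiB         */

so the new code ends up calling unmap_stage2_range(kvm, 0x10000000,
0x100000), tearing down exactly the stage-2 mappings that backed the
deleted region.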