From patchwork Fri Nov 14 01:57:47 2014
X-Patchwork-Id: 5302641
From: Mario Smarduch
To: pbonzini@redhat.com, james.hogan@imgtec.com, christoffer.dall@linaro.org,
	agraf@suse.de, marc.zyngier@arm.com, cornelia.huck@de.ibm.com,
	borntraeger@de.ibm.com, catalin.marinas@arm.com
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	kvm-ia64@vger.kernel.org, kvm-ppc@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	Mario Smarduch
Subject: [PATCH v14 6/7] KVM: arm: dirty logging write protect support
Date: Thu, 13 Nov 2014 17:57:47 -0800
Message-id: <1415930268-7674-7-git-send-email-m.smarduch@samsung.com>
In-reply-to: <1415930268-7674-1-git-send-email-m.smarduch@samsung.com>
References: <1415930268-7674-1-git-send-email-m.smarduch@samsung.com>

Add support to track dirty pages between user space KVM_GET_DIRTY_LOG
ioctl calls. kvm_get_dirty_log_protect() does most of the work.

Reviewed-by: Marc Zyngier
Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/Kconfig |  1 +
 arch/arm/kvm/arm.c   | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/kvm/mmu.c   | 22 ++++++++++++++++++++++
 3 files changed, 69 insertions(+)
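Not part of the patch: for anyone who wants to exercise this path from user
space, the sketch below polls the dirty bitmap of one memory slot with
KVM_GET_DIRTY_LOG. It is only an illustration of the ioctl this series hooks
up; the slot id, slot size and an already set up vm_fd are assumptions, and
error handling is minimal.

/*
 * Illustration only -- not part of this patch.  Assumes vm_fd is a KVM VM
 * file descriptor with memory slot 0 already registered as SLOT_SIZE bytes
 * of guest RAM (slot id and size are made up for this sketch).
 */
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#define SLOT_SIZE	(64UL << 20)		/* assumed: 64 MB slot */
#define GUEST_PAGE_SIZE	4096UL
#define SLOT_PAGES	(SLOT_SIZE / GUEST_PAGE_SIZE)

static void poll_dirty_log(int vm_fd)
{
	/* one bit per page, rounded up to a multiple of 64 bits */
	size_t bitmap_bytes = ((SLOT_PAGES + 63) / 64) * sizeof(uint64_t);
	uint64_t *bitmap = calloc(1, bitmap_bytes);
	struct kvm_dirty_log log = {
		.slot = 0,			/* assumed slot id */
		.dirty_bitmap = bitmap,
	};
	size_t i;

	/* snapshot and clear the log; KVM re-write-protects the dirty pages */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
		perror("KVM_GET_DIRTY_LOG");
		free(bitmap);
		return;
	}

	for (i = 0; i < SLOT_PAGES; i++)
		if (bitmap[i / 64] & (1ULL << (i % 64)))
			printf("gfn offset %zu dirtied since last call\n", i);

	free(bitmap);
}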
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index f27f336..a8d1ace 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -24,6 +24,7 @@ config KVM
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
 	select KVM_ARM_HOST
+	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	depends on ARM_VIRT_EXT && ARM_LPAE
 	---help---
 	  Support hosting virtualized guest machines. You will also
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a99e0cd..040c0f3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -737,9 +737,55 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 }
 
+/**
+ * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
+ * @kvm:	kvm instance
+ * @log:	slot id and address to which we copy the log
+ *
+ * We need to keep in mind that VCPU threads can write to the bitmap
+ * concurrently. So, to avoid losing data, we keep the following order for
+ * each bit:
+ *
+ *   1. Take a snapshot of the bit and clear it if needed.
+ *   2. Write protect the corresponding page.
+ *   3. Copy the snapshot to userspace.
+ *   4. Flush TLBs if needed.
+ *
+ * Steps 1, 2 and 3 are handled by kvm_get_dirty_log_protect().
+ * Between 2 and 4, the guest may write to the page using the remaining TLB
+ * entry. This is not a problem because the page is reported dirty using
+ * the snapshot taken before, and step 4 ensures that writes done after
+ * exiting to userspace will be logged for the next call.
+ */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
+#ifdef CONFIG_ARM
+	int r;
+	bool is_dirty = false;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
+	if (r)
+		goto out;
+
+	/*
+	 * If kvm_get_dirty_log_protect() fails we may skip the TLB flush,
+	 * leaving a few stale TLB entries for pages that were just write
+	 * protected. That is harmless: those entries merely keep the pages
+	 * in their original R/W state a little longer. Since the bitmap is
+	 * corrupt, userspace will error out anyway (e.g. during migration
+	 * or dirty page logging for other reasons) and terminate dirty
+	 * page logging.
+	 */
+	if (is_dirty)
+		kvm_flush_remote_tlbs(kvm);
+out:
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+#else /* ARM64 */
 	return -EINVAL;
+#endif
 }
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1e8b6a9..8137455 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -870,6 +870,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
+
+/**
+ * kvm_arch_mmu_write_protect_pt_masked() - write protect dirty pages
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in mask and write protects the associated PTEs. The
+ * caller must hold kvm->mmu_lock.
+ */
+void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 #endif
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
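A note on kvm_arch_mmu_write_protect_pt_masked(), since the __ffs()/__fls()
trick is easy to misread: the generic dirty-log code passes the arch hook one
unsigned long worth of dirty-page bits at a time, so start/end span from the
lowest to the highest dirty gfn in that chunk, and any clean pages in between
get write protected as well (harmless, they just take one more fault before
being marked dirty again). The standalone sketch below redoes the arithmetic
in user space; __ffs()/__fls() are modelled with GCC builtins, and base_gfn,
mask and PAGE_SHIFT are made-up example values.

/*
 * Standalone illustration of the start/end arithmetic above -- not kernel
 * code.  __ffs()/__fls() are modelled with GCC builtins; base_gfn, mask and
 * PAGE_SHIFT are example values, not taken from the patch.
 */
#include <inttypes.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	uint64_t base_gfn = 0x10000;	/* stands in for slot->base_gfn + gfn_offset */
	unsigned long mask = 0x22;	/* pages 1 and 5 of this chunk are dirty */

	/* index of the lowest and highest set bit, as __ffs()/__fls() return */
	unsigned long first = __builtin_ctzl(mask);			  /* 1 */
	unsigned long last = sizeof(mask) * 8 - 1 - __builtin_clzl(mask); /* 5 */

	uint64_t start = (base_gfn + first) << PAGE_SHIFT;
	uint64_t end = (base_gfn + last + 1) << PAGE_SHIFT;

	/*
	 * stage2_wp_range(kvm, start, end) then covers gfns 1..5 of the chunk,
	 * i.e. the clean pages 2, 3 and 4 too -- extra write protection only
	 * costs an extra fault the next time those pages are written.
	 */
	printf("write protect IPA range [0x%" PRIx64 ", 0x%" PRIx64 ")\n",
	       start, end);
	return 0;
}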