From patchwork Thu Jun 19 01:31:39 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4380251
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	christoffer.dall@linaro.org
Subject: [PATCH v8 3/4] arm: dirty log write protect management support
Date: Wed, 18 Jun 2014 18:31:39 -0700
Message-id: <1403141500-31010-4-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1403141500-31010-1-git-send-email-m.smarduch@samsung.com>
References: <1403141500-31010-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	linux-arm-kernel@lists.infradead.org, jays.lee@samsung.com,
	gavin.guo@canonical.com, Mario Smarduch

This patch adds support for keeping track of VM dirty pages. As the dirty
page log is retrieved, the pages that have been written are write protected
again for the next write and log read. For ARMv8, reading the dirty log
returns -EINVAL (invalid operation).

Signed-off-by: Mario Smarduch
---
 arch/arm/include/asm/kvm_host.h |  3 ++
 arch/arm/kvm/arm.c              | 83 +++++++++++++++++++++++++++++++++++++++
 arch/arm/kvm/mmu.c              | 22 +++++++++++
 3 files changed, 108 insertions(+)
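
For context only, and not part of the patch itself: the sketch below shows one
way a userspace VMM might drain the dirty log that this patch implements for
ARM, using the standard KVM_GET_DIRTY_LOG ioctl and struct kvm_dirty_log from
<linux/kvm.h>. The vm_fd, slot and slot_pages parameters are assumed to come
from the VMM's own bookkeeping, and the bitmap sizing assumes a 64-bit host.

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Retrieve and scan one memslot's dirty log; returns the number of pages
 * written since the previous call, or -1 on error. Illustrative only. */
static long drain_dirty_log(int vm_fd, __u32 slot, unsigned long slot_pages)
{
	/* one bit per page, rounded up to a whole long (64-bit host) */
	size_t n = (slot_pages + 63) / 64 * sizeof(unsigned long);
	unsigned long *bitmap = calloc(1, n);
	struct kvm_dirty_log log;
	unsigned long i;
	long dirty = 0;

	if (!bitmap)
		return -1;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;

	/* The kernel snapshots and clears the log, write protects the dirty
	 * pages, flushes stale TLB entries and copies the snapshot out. */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
		free(bitmap);
		return -1;
	}

	for (i = 0; i < slot_pages; i++)
		if (bitmap[i / 64] & (1UL << (i % 64)))
			dirty++;	/* page i must be resent to the target */

	free(bitmap);
	return dirty;
}

A VMM would typically call this in a loop during migration until the number of
pages dirtied per pass is small enough to pause the guest and send the rest.
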
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 586c467..dbf3d45 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -233,5 +233,8 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 
 void kvm_tlb_flush_vmid(struct kvm *kvm);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask);
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e11c2dd..cb3c090 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -783,10 +783,93 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	}
 }
 
+#ifdef CONFIG_ARM
+/**
+ * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
+ * @kvm:	kvm instance
+ * @log:	slot id and address to which we copy the log
+ *
+ * We need to keep in mind that VCPU threads can write to the bitmap
+ * concurrently. So, to avoid losing data, we keep the following order for
+ * each bit:
+ *
+ *   1. Take a snapshot of the bit and clear it if needed.
+ *   2. Write protect the corresponding page.
+ *   3. Flush TLBs if needed.
+ *   4. Copy the snapshot to userspace.
+ *
+ * Between 2 and 3, the guest may write to the page using the remaining TLB
+ * entry. This is not a problem because the page will be reported dirty at
+ * step 4 using the snapshot taken before, and step 3 ensures that successive
+ * writes will be logged for the next call.
+ */
+int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
+		struct kvm_dirty_log *log)
+{
+	int r;
+	struct kvm_memory_slot *memslot;
+	unsigned long n, i;
+	unsigned long *dirty_bitmap;
+	unsigned long *dirty_bitmap_buffer;
+	bool is_dirty = false;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = -EINVAL;
+	if (log->slot >= KVM_USER_MEM_SLOTS)
+		goto out;
+
+	memslot = id_to_memslot(kvm->memslots, log->slot);
+
+	dirty_bitmap = memslot->dirty_bitmap;
+	r = -ENOENT;
+	if (!dirty_bitmap)
+		goto out;
+
+	n = kvm_dirty_bitmap_bytes(memslot);
+
+	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
+	memset(dirty_bitmap_buffer, 0, n);
+
+	spin_lock(&kvm->mmu_lock);
+
+	for (i = 0; i < n / sizeof(long); i++) {
+		unsigned long mask;
+		gfn_t offset;
+
+		if (!dirty_bitmap[i])
+			continue;
+
+		is_dirty = true;
+
+		mask = xchg(&dirty_bitmap[i], 0);
+		dirty_bitmap_buffer[i] = mask;
+
+		offset = i * BITS_PER_LONG;
+		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
+	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	lockdep_assert_held(&kvm->slots_lock);
+	if (is_dirty)
+		kvm_tlb_flush_vmid(kvm);
+
+	r = -EFAULT;
+	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
+		goto out;
+
+	r = 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
+#else
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	return -EINVAL;
 }
+#endif
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 37edcbe..1caf511 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -888,6 +888,28 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	kvm_tlb_flush_vmid(kvm);
 	spin_unlock(&kvm->mmu_lock);
 }
+
+/**
+ * kvm_mmu_write_protect_pt_masked() - write protect dirty pages set in mask
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in mask and write protects the associated PTEs. The
+ * caller must hold kvm->mmu_lock.
+ */
+void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	stage2_wp_range(kvm, start, end);
+}
 #endif
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
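
A note on the address math in kvm_mmu_write_protect_pt_masked(): each word of
the dirty bitmap covers BITS_PER_LONG guest frames starting at gfn_offset
(i * BITS_PER_LONG in the ioctl loop above), and the span from the lowest to
the highest set bit is handed to stage2_wp_range(). The standalone sketch
below, which is an illustration and not kernel code, reproduces that
calculation with GCC builtins standing in for the kernel's __ffs/__fls and an
assumed 4K PAGE_SHIFT.

#include <stdio.h>

#define PAGE_SHIFT	12
#define BITS_PER_LONG	64

int main(void)
{
	unsigned long long base_gfn = 0x10000;	/* slot->base_gfn + gfn_offset */
	unsigned long mask = 0x0000f00000000002UL; /* dirty pages in this word */

	/* index of lowest and highest set bit, as __ffs()/__fls() would return */
	unsigned long first = __builtin_ctzl(mask);
	unsigned long last  = BITS_PER_LONG - 1 - __builtin_clzl(mask);

	unsigned long long start = (base_gfn + first) << PAGE_SHIFT;
	unsigned long long end   = (base_gfn + last + 1) << PAGE_SHIFT;

	/* stage2_wp_range(kvm, start, end) would write protect this span;
	 * here it prints: write protect IPA [0x10001000, 0x10030000) */
	printf("write protect IPA [0x%llx, 0x%llx)\n", start, end);
	return 0;
}

Note that clean pages sitting between two dirty ones in the same word are
write protected as well; that keeps the walk to a single range at the cost of,
at most, one extra permission fault on the next write to those pages.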