From patchwork Fri Jul 25 00:56:06 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4620691
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, pbonzini@redhat.com, gleb@kernel.org,
	agraf@suse.de, xiantao.zhang@intel.com, borntraeger@de.ibm.com,
	cornelia.huck@de.ibm.com
Cc: xiaoguangrong@linux.vnet.ibm.com, steve.capper@arm.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	jays.lee@samsung.com, sungjinn.chung@samsung.com, Mario Smarduch
Subject: [PATCH v9 2/4] arm: ARMv7 dirty page logging initial mem region
	write protect (w/no huge PUD support)
Date: Thu, 24 Jul 2014 17:56:06 -0700
Message-id: <1406249768-25315-3-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1406249768-25315-1-git-send-email-m.smarduch@samsung.com>
References: <1406249768-25315-1-git-send-email-m.smarduch@samsung.com>

Patch adds support for initial write protection of a VM memslot. The patch
series assumes that huge PUDs will not be used in 2nd stage tables.

Signed-off-by: Mario Smarduch
---
 arch/arm/include/asm/kvm_host.h       |    1 +
 arch/arm/include/asm/kvm_mmu.h        |   20 ++++++
 arch/arm/include/asm/pgtable-3level.h |    1 +
 arch/arm/kvm/arm.c                    |    9 +++
 arch/arm/kvm/mmu.c                    |  128 +++++++++++++++++++++++++++++++++
 5 files changed, 159 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 042206f..6521a2d 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -231,5 +231,6 @@ int kvm_perf_teardown(void);
 u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
 int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);
 void kvm_arch_flush_remote_tlbs(struct kvm *);
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5cc0b0f..08ab5e8 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -114,6 +114,26 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 	pmd_val(*pmd) |= L_PMD_S2_RDWR;
 }
 
+static inline void kvm_set_s2pte_readonly(pte_t *pte)
+{
+	pte_val(*pte) = (pte_val(*pte) & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
+}
+
+static inline bool kvm_s2pte_readonly(pte_t *pte)
+{
+	return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
+}
+
+static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
+{
+	pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
+}
+
+static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
+{
+	return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
+}
+
 /* Open coded p*d_addr_end that can deal with 64bit addresses */
 #define kvm_pgd_addr_end(addr, end)					\
 ({	u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;		\
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 85c60ad..d8bb40b 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -129,6 +129,7 @@
 #define L_PTE_S2_RDONLY		(_AT(pteval_t, 1) << 6)   /* HAP[1]   */
 #define L_PTE_S2_RDWR		(_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
 
+#define L_PMD_S2_RDONLY		(_AT(pmdval_t, 1) << 6)   /* HAP[1]   */
 #define L_PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
 
 /*
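
As background for the new stage 2 read-only helpers above, here is a minimal
stand-alone sketch (not part of the patch) of the HAP[2:1] access-permission
encoding they manipulate. The constants and the masking mirror the kvm_mmu.h
and pgtable-3level.h definitions; the typedef and the sample value are
illustrative only.

/* Stand-alone illustration of the HAP[2:1] masking performed by
 * kvm_set_s2pte_readonly()/kvm_s2pte_readonly(). */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t pteval_t;

#define L_PTE_S2_RDONLY	((pteval_t)1 << 6)	/* HAP[1]   */
#define L_PTE_S2_RDWR	((pteval_t)3 << 6)	/* HAP[2:1] */

static pteval_t set_s2_readonly(pteval_t pte)
{
	/* Clear both permission bits, then set the read-only encoding. */
	return (pte & ~L_PTE_S2_RDWR) | L_PTE_S2_RDONLY;
}

static int s2_readonly(pteval_t pte)
{
	return (pte & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
}

int main(void)
{
	pteval_t pte = L_PTE_S2_RDWR;	/* a writable stage 2 mapping */

	printf("readonly before: %d\n", s2_readonly(pte));	/* prints 0 */
	pte = set_s2_readonly(pte);
	printf("readonly after:  %d\n", s2_readonly(pte));	/* prints 1 */
	return 0;
}
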
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 3c82b37..e11c2dd 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -242,6 +242,15 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
+#ifdef CONFIG_ARM
+	/*
+	 * At this point the memslot has been committed and there is an
+	 * allocated dirty_bitmap[]; dirty pages will be tracked while the
+	 * memory slot is write protected.
+	 */
+	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_wp_memory_region(kvm, mem->slot);
+#endif
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
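
For context, a hedged userspace sketch (not part of the patch) of how the hook
above is exercised: registering a memslot with KVM_MEM_LOG_DIRTY_PAGES set
causes kvm_arch_commit_memory_region() to call kvm_mmu_wp_memory_region().
The slot number, guest physical base and size passed in are placeholders.

/* Illustrative only: enable dirty page logging on an existing memslot. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int enable_dirty_logging(int vm_fd, void *host_mem, __u64 guest_phys,
				__u64 size, __u32 slot)
{
	struct kvm_userspace_memory_region region;

	memset(&region, 0, sizeof(region));
	region.slot = slot;
	region.flags = KVM_MEM_LOG_DIRTY_PAGES;	/* triggers the write protect */
	region.guest_phys_addr = guest_phys;
	region.memory_size = size;
	region.userspace_addr = (unsigned long)host_mem;

	/*
	 * Once KVM commits the updated slot, kvm_arch_commit_memory_region()
	 * write protects it via kvm_mmu_wp_memory_region().
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}
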
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 35254c6..7bfc792 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -763,6 +763,134 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 	return false;
 }
 
+#ifdef CONFIG_ARM
+/**
+ * stage2_wp_pte_range - write protect PTE range
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pte_range(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			if (!kvm_s2pte_readonly(pte))
+				kvm_set_s2pte_readonly(pte);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_wp_pmd_range - write protect PMD range
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_wp_pmd_range(pud_t *pud, phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = pmd_offset(pud, addr);
+
+	do {
+		next = kvm_pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			if (kvm_pmd_huge(*pmd)) {
+				if (!kvm_s2pmd_readonly(pmd))
+					kvm_set_s2pmd_readonly(pmd);
+			} else {
+				stage2_wp_pte_range(pmd, addr, next);
+			}
+		}
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_wp_pud_range - write protect PUD range
+ * @kvm:	pointer to kvm structure
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ *
+ * While walking the PUD range huge PUDs are ignored; in the future this may
+ * need to be revisited to determine how to handle huge PUDs when logging of
+ * dirty pages is enabled.
+ */
+static void stage2_wp_pud_range(struct kvm *kvm, pgd_t *pgd,
+				phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = kvm_pud_addr_end(addr, end);
+		/* TODO: huge PUD not supported, revisit later */
+		BUG_ON(pud_huge(*pud));
+		if (!pud_none(*pud))
+			stage2_wp_pmd_range(pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * stage2_wp_range() - write protect stage2 memory region range
+ * @kvm:	The KVM pointer
+ * @addr:	Start address of range
+ * @end:	End address of range
+ */
+static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
+{
+	pgd_t *pgd;
+	phys_addr_t next;
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		/*
+		 * Release kvm_mmu_lock periodically if the memory region is
+		 * large; otherwise features like hung-task detection, the
+		 * lock detector or lockdep may panic. Holding the lock this
+		 * long will also starve other vCPUs. Applies to huge regions.
+		 */
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
+			cond_resched_lock(&kvm->mmu_lock);
+
+		next = kvm_pgd_addr_end(addr, end);
+		if (pgd_present(*pgd))
+			stage2_wp_pud_range(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_wp_memory_region() - write protect stage 2 entries for memory slot
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot to write protect
+ *
+ * Called to start logging dirty pages when the KVM_MEM_LOG_DIRTY_PAGES flag
+ * is set on a memory region. After this function returns, all present PMDs
+ * and PTEs of the memory region are write protected. Afterwards the dirty
+ * page log can be read.
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+
+void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
+{
+	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	spin_lock(&kvm->mmu_lock);
+	stage2_wp_range(kvm, start, end);
+	kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
+}
+#endif
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot,
 			  unsigned long fault_status)
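
Finally, a hedged sketch (again not part of the patch) of the follow-up step
the kvm_mmu_wp_memory_region() comment refers to: once the slot is write
protected, userspace retrieves the per-page dirty state with KVM_GET_DIRTY_LOG.
The bitmap sizing below (one bit per page, rounded up to 64-bit words) follows
KVM's documented dirty log layout; the helper name is made up for the example.

/* Illustrative only: fetch the dirty bitmap for a logged memslot. */
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* npages: number of guest pages covered by the slot. */
static unsigned long *get_dirty_log(int vm_fd, __u32 slot, __u64 npages)
{
	struct kvm_dirty_log log;
	size_t bytes = ((npages + 63) / 64) * 8;	/* one bit per page */
	unsigned long *bitmap = calloc(1, bytes);

	if (!bitmap)
		return NULL;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;

	/* Set bits mark pages written since the previous call. */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
		free(bitmap);
		return NULL;
	}
	return bitmap;
}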