From patchwork Tue Apr 29 00:55:08 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4084021
Message-id: <535EF86C.7020506@samsung.com>
Date: Mon, 28 Apr 2014 17:55:08 -0700
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, Marc Zyngier, christoffer.dall@linaro.org,
 Steve Capper
Subject: [PATCH v4 4/5] add 2nd stage page fault handling during live migration
Cc: Peter Maydell, kvm@vger.kernel.org, gavin.guo@canonical.com, 이정석,
 정성진, linux-arm-kernel@lists.infradead.org

This patch adds support for handling 2nd stage page faults during migration;
it disables faulting in huge pages and splits up existing huge pages.

Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/mmu.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3442594..88f5503 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -978,6 +978,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	pfn_t pfn;
+	bool migration_active;
 
 	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
 	if (fault_status == FSC_PERM && !write_fault) {
@@ -1029,12 +1030,21 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 
 	spin_lock(&kvm->mmu_lock);
+	/* Place inside the lock to prevent a race when the whole VM is being
+	 * write protected. Prevents a race of huge page install while
+	 * migration is active.
+	 */
+	migration_active = vcpu->kvm->arch.migration_in_progress;
+
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
-	if (!hugetlb && !force_pte)
+
+	/* During migration don't rebuild huge pages */
+	if (!hugetlb && !force_pte && !migration_active)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
-	if (hugetlb) {
+	/* During migration don't install new huge pages */
+	if (hugetlb && !migration_active) {
 		pmd_t new_pmd = pfn_pmd(pfn, PAGE_S2);
 		new_pmd = pmd_mkhuge(new_pmd);
 		if (writable) {
@@ -1046,6 +1056,21 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	} else {
 		pte_t new_pte = pfn_pte(pfn, PAGE_S2);
 		if (writable) {
+			/* First, convert the huge page pfn to a normal 4k
+			 * page pfn while migration is in progress.
+			 * Second, in migration mode, in the rare case where
+			 * splitting of huge pages fails, check if the pmd is
+			 * mapping a huge page; if it is, clear it so
+			 * stage2_set_pte() can map in a small page.
+			 */
+			if (migration_active && hugetlb) {
+				pmd_t *pmd;
+				pfn += pte_index(fault_ipa);
+				new_pte = pfn_pte(pfn, PAGE_S2);
+				pmd = stage2_get_pmd(kvm, NULL, fault_ipa);
+				if (pmd && kvm_pmd_huge(*pmd))
+					clear_pmd_entry(kvm, pmd, fault_ipa);
+			}
 			kvm_set_s2pte_writable(&new_pte);
 			kvm_set_pfn_dirty(pfn);
 		}
@@ -1053,6 +1078,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
 	}
 
+	if (writable)
+		mark_page_dirty(kvm, gfn);
 
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
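
As a rough illustration of the logic the hunks above add to user_mem_abort()
(this is not kernel code; struct s2_fault, install_huge_mapping(),
dissolve_huge_pmd() and install_small_mapping() are made-up stand-ins for the
real KVM/ARM helpers), the write-fault path during migration boils down to:

/* Standalone sketch, compiles on its own; all names below are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define SMALL_PAGE_SHIFT 12		/* 4K pages */
#define PTES_PER_PMD	 512		/* 4K pages per 2M huge mapping */

struct s2_fault {
	uint64_t fault_ipa;	/* guest physical address that faulted */
	uint64_t pfn;		/* host pfn backing the fault */
	bool want_huge;		/* a 2M mapping would normally be installed */
	bool huge_pmd_mapped;	/* a huge PMD already covers fault_ipa */
};

static void install_huge_mapping(uint64_t ipa, uint64_t pfn)  { (void)ipa; (void)pfn; }
static void dissolve_huge_pmd(uint64_t ipa)                   { (void)ipa; }
static void install_small_mapping(uint64_t ipa, uint64_t pfn) { (void)ipa; (void)pfn; }

void handle_write_fault(struct s2_fault *f, bool migration_active)
{
	/* During migration never install a new huge mapping. */
	if (f->want_huge && !migration_active) {
		install_huge_mapping(f->fault_ipa, f->pfn);
		return;
	}

	if (migration_active && f->want_huge) {
		/* Pick the 4K pfn inside the 2M range that covers fault_ipa,
		 * which is what pfn += pte_index(fault_ipa) does in the patch.
		 */
		f->pfn += (f->fault_ipa >> SMALL_PAGE_SHIFT) & (PTES_PER_PMD - 1);

		/* If a huge PMD still maps this range, clear it first so a
		 * 4K PTE can be installed and dirty-tracked per page.
		 */
		if (f->huge_pmd_mapped)
			dissolve_huge_pmd(f->fault_ipa);
	}

	install_small_mapping(f->fault_ipa, f->pfn);
}

The pfn adjustment mirrors the patch: instead of mapping the whole 2M region,
only the 4K page that actually covers the faulting address is mapped (and, via
mark_page_dirty(), logged), which keeps the dirty log page-granular while
migration is active.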