From patchwork Thu May 8 00:40:16 2014
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, christoffer.dall@linaro.org
Subject: [PATCH v5 4/4] add 2nd stage page fault handling during live migration
Date: Wed, 07 May 2014 17:40:16 -0700
Message-id: <1399509616-4632-5-git-send-email-m.smarduch@samsung.com>
In-reply-to: <1399509616-4632-1-git-send-email-m.smarduch@samsung.com>
References: <1399509616-4632-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com,
	linux-arm-kernel@lists.infradead.org, jays.lee@samsung.com,
	sungjinn.chung@samsung.com, gavin.guo@canonical.com, Mario Smarduch

This patch adds support for handling 2nd stage page faults during live
migration: it disables faulting in huge pages and splits up existing huge
pages so that dirty pages can be tracked at small page granularity.

Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/mmu.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1458b6e..b0633dc 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1034,6 +1034,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	pfn_t pfn;
+	bool migration_active;
 
 	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
 	if (fault_status == FSC_PERM && !write_fault) {
@@ -1085,12 +1086,22 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 
 	spin_lock(&kvm->mmu_lock);
+
+	/* Read inside the lock to prevent a race with a concurrent write
+	 * protect of the whole VM, and to prevent installing a huge page
+	 * while migration is active.
+	 */
+	migration_active = vcpu->kvm->arch.migration_in_progress;
+
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
-	if (!hugetlb && !force_pte)
+
+	/* During migration there is no need to rebuild huge pages */
+	if (!hugetlb && !force_pte && !migration_active)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
-	if (hugetlb) {
+	/* During migration do not install new huge pages */
+	if (hugetlb && !migration_active) {
 		pmd_t new_pmd = pfn_pmd(pfn, PAGE_S2);
 		new_pmd = pmd_mkhuge(new_pmd);
 		if (writable) {
@@ -1102,6 +1113,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	} else {
 		pte_t new_pte = pfn_pte(pfn, PAGE_S2);
 		if (writable) {
+			/* First, while migration is in progress, convert the
+			 * huge page pfn to the pfn of the faulting small page.
+			 * Second, if the pmd maps a huge page, clear the pmd
+			 * so stage2_set_pte() can split it.
+			 */
+			if (migration_active && hugetlb) {
+				pmd_t *pmd;
+				pfn += pte_index(fault_ipa);
+				new_pte = pfn_pte(pfn, PAGE_S2);
+				pmd = stage2_get_pmd(kvm, NULL, fault_ipa);
+				if (pmd && kvm_pmd_huge(*pmd))
+					clear_pmd_entry(kvm, pmd, fault_ipa);
+			}
 			kvm_set_s2pte_writable(&new_pte);
 			kvm_set_pfn_dirty(pfn);
 		}
@@ -1109,6 +1133,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
 	}
 
+	if (writable)
+		mark_page_dirty(kvm, gfn);
 
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
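
Note (illustration only, not part of the patch): the pfn arithmetic in the
hunk above relies on pte_index() to select the 4K page inside the huge page
that backs the faulting IPA. The stand-alone user-space sketch below mirrors
that calculation, assuming 4K pages and 2MB PMD-level blocks; the huge_pfn
and fault_ipa values are hypothetical and chosen only for the example.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21				/* 2MB blocks with 4K pages */
#define PTRS_PER_PTE	(1UL << (PMD_SHIFT - PAGE_SHIFT))	/* 512 */

/* Mirrors the kernel's pte_index(): index of the 4K page within its
 * 2MB block.
 */
static uint64_t pte_index(uint64_t ipa)
{
	return (ipa >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}

int main(void)
{
	uint64_t huge_pfn  = 0x80200;		/* hypothetical pfn of the 2MB block */
	uint64_t fault_ipa = 0x40056000;	/* hypothetical faulting guest IPA */

	/* Mirrors "pfn += pte_index(fault_ipa)" in user_mem_abort() above */
	uint64_t small_pfn = huge_pfn + pte_index(fault_ipa);

	printf("pte_index=%llu, small page pfn=0x%llx\n",
	       (unsigned long long)pte_index(fault_ipa),
	       (unsigned long long)small_pfn);
	return 0;
}

With these example values pte_index(fault_ipa) is 86 (0x56), so the pte
installed by stage2_set_pte() maps pfn 0x80256, i.e. only the 4K page that
actually faulted, and that page is what gets reported via mark_page_dirty().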