From patchwork Thu May 15 18:27:31 2014
X-Patchwork-Submitter: Mario Smarduch
X-Patchwork-Id: 4185121
From: Mario Smarduch
To: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, christoffer.dall@linaro.org
Subject: [PATCH v6 4/4] add 2nd stage page fault handling during live migration
Date: Thu, 15 May 2014 11:27:31 -0700
Message-id: <1400178451-4984-5-git-send-email-m.smarduch@samsung.com>
X-Mailer: git-send-email 1.7.9.5
In-reply-to: <1400178451-4984-1-git-send-email-m.smarduch@samsung.com>
References: <1400178451-4984-1-git-send-email-m.smarduch@samsung.com>
Cc: peter.maydell@linaro.org, kvm@vger.kernel.org, steve.capper@arm.com, linux-arm-kernel@lists.infradead.org, jays.lee@samsung.com, sungjinn.chung@samsung.com, gavin.guo@canonical.com, Mario Smarduch

This patch adds support for handling 2nd stage page faults during migration: it disables faulting in huge pages and splits up existing huge pages.

Signed-off-by: Mario Smarduch
---
 arch/arm/kvm/mmu.c | 36 ++++++++++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index b939312..10e7bf6 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1002,6 +1002,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 	struct vm_area_struct *vma;
 	pfn_t pfn;
+	bool migration_active;
 
 	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
 	if (fault_status == FSC_PERM && !write_fault) {
@@ -1053,12 +1054,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 
 	spin_lock(&kvm->mmu_lock);
+
+	/*
+	 * Place inside lock to prevent race condition when whole VM is being
+	 * write protected. Prevent race of huge page install when migration is
+	 * active.
+	 */
+	migration_active = vcpu->kvm->arch.migration_in_progress;
+
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
-	if (!hugetlb && !force_pte)
+
+	/* When migrating don't spend cycles coalescing huge pages */
+	if (!hugetlb && !force_pte && !migration_active)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
-	if (hugetlb) {
+	/* During migration don't install huge pages */
+	if (hugetlb && !migration_active) {
 		pmd_t new_pmd = pfn_pmd(pfn, PAGE_S2);
 		new_pmd = pmd_mkhuge(new_pmd);
 		if (writable) {
@@ -1069,6 +1081,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
 		pte_t new_pte = pfn_pte(pfn, PAGE_S2);
+
+		/*
+		 * If pmd is mapping a huge page then split it up into
+		 * small pages, when doing live migration.
+		 */
+		if (migration_active) {
+			pmd_t *pmd;
+			if (hugetlb) {
+				pfn += pte_index(fault_ipa);
+				gfn = fault_ipa >> PAGE_SHIFT;
+			}
+			new_pte = pfn_pte(pfn, PAGE_S2);
+			pmd = stage2_get_pmd(kvm, NULL, fault_ipa);
+			if (pmd && kvm_pmd_huge(*pmd))
+				clear_pmd_entry(kvm, pmd, fault_ipa);
+		}
+
 		if (writable) {
 			kvm_set_s2pte_writable(&new_pte);
 			kvm_set_pfn_dirty(pfn);
@@ -1077,6 +1106,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
 	}
 
+	/* Assuming 4k pages, set one bit/page in memslot dirty_bitmap[] */
+	if (writable)
+		mark_page_dirty(kvm, gfn);
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
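
For reference, the split-on-fault path above leans on a small piece of pfn/gfn arithmetic: when a write fault hits a region still backed by a 2MB stage-2 huge page, pte_index(fault_ipa) selects the 4K page inside that huge mapping, so the small-page pfn is the huge page's base pfn plus that index, and the gfn passed to mark_page_dirty() is fault_ipa >> PAGE_SHIFT. The standalone userspace sketch below only illustrates that arithmetic under the 4K-page / 2MB-PMD assumption this series targets; PAGE_SHIFT, PTRS_PER_PTE, the local pte_index() helper and the sample addresses are stand-ins for illustration, not the kernel definitions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative constants: 4K pages, 2MB PMD-level huge pages. */
#define PAGE_SHIFT   12
#define PTRS_PER_PTE 512   /* 4K PTEs covering one 2MB PMD range */

/* Local stand-in for the kernel's pte_index(): which 4K slot of the
 * surrounding 2MB range an address falls into. */
static uint64_t pte_index(uint64_t ipa)
{
	return (ipa >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}

int main(void)
{
	/* Hypothetical numbers: a 2MB huge page maps guest IPA 0x80000000
	 * and its first 4K frame is host pfn 0x12000 (512-page aligned). */
	uint64_t huge_base_pfn = 0x12000;
	uint64_t fault_ipa = 0x80000000ULL + 0x5000;	/* fault 5 pages in */

	/* Same arithmetic as the migration_active branch above: pick the
	 * single 4K pfn inside the huge page, and the gfn whose bit
	 * mark_page_dirty() flips in the memslot dirty_bitmap[]. */
	uint64_t small_pfn = huge_base_pfn + pte_index(fault_ipa);
	uint64_t gfn = fault_ipa >> PAGE_SHIFT;

	printf("fault_ipa = 0x%" PRIx64 "\n", fault_ipa);
	printf("4K pfn    = 0x%" PRIx64 " (base 0x%" PRIx64 " + index %" PRIu64 ")\n",
	       small_pfn, huge_base_pfn, pte_index(fault_ipa));
	printf("gfn       = 0x%" PRIx64 "\n", gfn);
	return 0;
}

With these sample numbers the fault lands five 4K pages into the huge mapping, so after clear_pmd_entry() the newly installed PTE maps pfn 0x12005 and only gfn 0x80005 is marked dirty, which is what keeps the dirty log page-granular while migration is active.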