From patchwork Wed Oct 31 17:57:41 2018
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 10663065
From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Cc: suzuki.poulose@arm.com, marc.zyngier@arm.com, Catalin Marinas,
	Punit Agrawal, will.deacon@arm.com, linux-kernel@vger.kernel.org,
	Russell King, punitagrawal@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v9 4/8] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Wed, 31 Oct 2018 17:57:41 +0000
Message-Id: <20181031175745.18650-5-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181031175745.18650-1-punit.agrawal@arm.com>
References: <20181031175745.18650-1-punit.agrawal@arm.com>

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal
Reviewed-by: Christoffer Dall
Reviewed-by: Suzuki K Poulose
Cc: Marc Zyngier
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_mmu.h   | 15 +++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e6eff8bf5d7f..37bf85d39607 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -87,6 +87,21 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= L_PTE_S2_RDWR;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 13d482710292..8da6d1b2a196 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -251,6 +251,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 #define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index fb5325f7a1ac..1c669c3c1208 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1347,9 +1347,12 @@ static void stage2_wp_puds(struct kvm *kvm, pgd_t *pgd,
 	do {
 		next = stage2_pud_addr_end(kvm, addr, end);
 		if (!stage2_pud_none(kvm, *pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(kvm, *pud));
-			stage2_wp_pmds(kvm, pud, addr, next);
+			if (stage2_pud_huge(kvm, *pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(kvm, pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1392,7 +1395,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
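
For readers following the stage2_wp_puds() hunk above, here is a minimal,
self-contained sketch of the same walk in plain C. It is illustrative only
and not part of the patch: the pud_t layout, the S2_* bits, PUD_SHIFT and
the helper names below are invented stand-ins for the kernel's
stage2_*/kvm_s2pud_* primitives, and the descent into the PMD level is
elided.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PUD_SHIFT	30			/* assume 1GiB PUD hugepages (illustrative) */
#define PUD_SIZE	(1ULL << PUD_SHIFT)
#define S2_RDWR		(1ULL << 1)		/* invented "writable" bit */
#define S2_HUGE		(1ULL << 2)		/* invented "block mapping" bit */

typedef struct { uint64_t val; } pud_t;

static bool pud_none(const pud_t *pud)     { return pud->val == 0; }
static bool pud_huge(const pud_t *pud)     { return pud->val & S2_HUGE; }
static bool pud_readonly(const pud_t *pud) { return !(pud->val & S2_RDWR); }
static void pud_mkreadonly(pud_t *pud)     { pud->val &= ~S2_RDWR; }

/* Walk [addr, end) one PUD at a time, write protecting huge entries. */
static void wp_puds(pud_t *table, uint64_t addr, uint64_t end)
{
	for (; addr < end; addr += PUD_SIZE) {
		pud_t *pud = &table[addr >> PUD_SHIFT];

		if (pud_none(pud))
			continue;

		if (pud_huge(pud)) {
			/* The branch the patch adds: protect the whole block. */
			if (!pud_readonly(pud))
				pud_mkreadonly(pud);
		} else {
			/* The real code descends into stage2_wp_pmds() here. */
		}
	}
}

int main(void)
{
	pud_t table[4] = {
		{ 0 },			/* empty entry               */
		{ S2_HUGE | S2_RDWR },	/* writable 1GiB block       */
		{ S2_HUGE },		/* block that is already RO  */
		{ S2_RDWR },		/* table entry (not a block) */
	};

	wp_puds(table, 0, 4 * PUD_SIZE);

	for (int i = 0; i < 4; i++)
		printf("pud[%d] = %#llx\n", i, (unsigned long long)table[i].val);

	return 0;
}

The point the patch adds is the huge-PUD branch: a PUD block mapping covers
its whole range with a single entry, so write protection (and later dirty
tracking) is applied to that one entry instead of descending to PMDs and
PTEs.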