From patchwork Mon Sep 24 17:45:48 2018
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 10612763
From: Punit Agrawal <punit.agrawal@arm.com>
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v7 5/9] KVM: arm64: Support dirty page tracking for PUD hugepages
Date: Mon, 24 Sep 2018 18:45:48 +0100
Message-Id: <20180924174552.8387-6-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180924174552.8387-1-punit.agrawal@arm.com>
References: <20180924174552.8387-1-punit.agrawal@arm.com>
List-Id: linux-arm-kernel@lists.infradead.org
Cc: marc.zyngier@arm.com, Catalin Marinas, Punit Agrawal,
    will.deacon@arm.com, linux-kernel@vger.kernel.org, Russell King,
    linux-arm-kernel@lists.infradead.org

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest page tables is used to track dirty pages when
migrating VMs.

Also, provide trivial implementations of the required kvm_s2pud_*
helpers to allow code sharing with arm32.

Signed-off-by: Punit Agrawal
Reviewed-by: Christoffer Dall
Reviewed-by: Suzuki K Poulose
Cc: Marc Zyngier
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm/include/asm/kvm_mmu.h   | 15 +++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
 virt/kvm/arm/mmu.c               | 11 +++++++----
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e77212e53e77..9ec09f4cc284 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -87,6 +87,21 @@ void kvm_clear_hyp_idmap(void);
 
 #define kvm_pmd_mkhuge(pmd)	pmd_mkhuge(pmd)
 
+/*
+ * The following kvm_*pud*() functions are provided strictly to allow
+ * sharing code with arm64. They should never be called in practice.
+ */
+static inline void kvm_set_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pud)
+{
+	BUG();
+	return false;
+}
+
 static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
 {
 	pte_val(pte) |= L_PTE_S2_RDWR;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index baabea0cbb66..3cc342177474 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -251,6 +251,16 @@ static inline bool kvm_s2pmd_exec(pmd_t *pmdp)
 	return !(READ_ONCE(pmd_val(*pmdp)) & PMD_S2_XN);
 }
 
+static inline void kvm_set_s2pud_readonly(pud_t *pudp)
+{
+	kvm_set_s2pte_readonly((pte_t *)pudp);
+}
+
+static inline bool kvm_s2pud_readonly(pud_t *pudp)
+{
+	return kvm_s2pte_readonly((pte_t *)pudp);
+}
+
 #define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 21079eb5bc15..9c48f2ca6583 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1347,9 +1347,12 @@ static void stage2_wp_puds(struct kvm *kvm, pgd_t *pgd,
 	do {
 		next = stage2_pud_addr_end(kvm, addr, end);
 		if (!stage2_pud_none(kvm, *pud)) {
-			/* TODO:PUD not supported, revisit later if supported */
-			BUG_ON(stage2_pud_huge(kvm, *pud));
-			stage2_wp_pmds(kvm, pud, addr, next);
+			if (stage2_pud_huge(kvm, *pud)) {
+				if (!kvm_s2pud_readonly(pud))
+					kvm_set_s2pud_readonly(pud);
+			} else {
+				stage2_wp_pmds(kvm, pud, addr, next);
+			}
 		}
 	} while (pud++, addr = next, addr != end);
 }
@@ -1392,7 +1395,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  *
  * Called to start logging dirty pages after memory region
  * KVM_MEM_LOG_DIRTY_PAGES operation is called. After this function returns
- * all present PMD and PTEs are write protected in the memory region.
+ * all present PUD, PMD and PTEs are write protected in the memory region.
  * Afterwards read of dirty page log can be called.
  *
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
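[Editor's note, not part of the patch: a minimal standalone sketch of the
permission-bit manipulation the new PUD helpers boil down to. The struct
types and plain stores below are simplifications — the real
kvm_set_s2pte_readonly() uses READ_ONCE() and a cmpxchg loop on the live
page-table entry — and the S2AP bit encodings are written out here only
for illustration.]

	#include <stdint.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel's page-table entry types. */
	typedef uint64_t pteval_t;
	typedef struct { pteval_t pte; } pte_t;
	typedef struct { pteval_t pud; } pud_t;

	/* Stage-2 access permission bits (S2AP) in the descriptor. */
	#define PTE_S2_RDONLY ((pteval_t)1 << 6)	/* S2AP = 01: read-only */
	#define PTE_S2_RDWR   ((pteval_t)3 << 6)	/* S2AP = 11: read/write */

	static void kvm_set_s2pte_readonly(pte_t *ptep)
	{
		/* Drop write permission, keep read permission. */
		ptep->pte &= ~PTE_S2_RDWR;
		ptep->pte |= PTE_S2_RDONLY;
	}

	static bool kvm_s2pte_readonly(pte_t *ptep)
	{
		return (ptep->pte & PTE_S2_RDWR) == PTE_S2_RDONLY;
	}

	/* The PUD helpers from the patch: thin wrappers over the PTE ones,
	 * since PUD and PTE descriptors share the S2AP bit layout. */
	static void kvm_set_s2pud_readonly(pud_t *pudp)
	{
		kvm_set_s2pte_readonly((pte_t *)pudp);
	}

	static bool kvm_s2pud_readonly(pud_t *pudp)
	{
		return kvm_s2pte_readonly((pte_t *)pudp);
	}

	int main(void)
	{
		/* A writable PUD hugepage entry, as stage2_wp_puds() would find it. */
		pud_t pud = { .pud = PTE_S2_RDWR };

		/* The check-then-set sequence from the stage2_wp_puds() hunk above. */
		if (!kvm_s2pud_readonly(&pud))
			kvm_set_s2pud_readonly(&pud);

		printf("read-only: %d\n", kvm_s2pud_readonly(&pud));	/* prints 1 */
		return 0;
	}

Once the hugepage is write protected this way, a subsequent guest write
takes a stage-2 permission fault, which is KVM's cue to mark the page in
the dirty log and restore write access.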