From patchwork Tue Jun 16 09:35:42 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11607005
From: Keqian Zhu
Subject: [PATCH 01/12] KVM: arm64: Add some basic functions to support hw DBM
Date: Tue, 16 Jun 2020 17:35:42 +0800
Message-ID: <20200616093553.27512-2-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
Prepare some basic functions to support hardware DBM for PTEs.

Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/kvm_mmu.h | 36 ++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b12bfc1f051a..e0ee6e23d626 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -265,6 +265,42 @@ static inline bool kvm_s2pud_young(pud_t pud)
 	return pud_young(pud);
 }
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+static inline bool kvm_hw_dbm_enabled(void)
+{
+	return !!(read_sysreg(vtcr_el2) & VTCR_EL2_HD);
+}
+
+static inline void kvm_set_s2pte_dbm(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_DBM;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
+static inline void kvm_clear_s2pte_dbm(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval &= ~PTE_DBM;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
+static inline bool kvm_s2pte_dbm(pte_t *ptep)
+{
+	return !!(READ_ONCE(pte_val(*ptep)) & PTE_DBM);
+}
+#endif /* CONFIG_ARM64_HW_AFDBM */
+
 #define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED
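[The set/clear helpers above all follow the same lock-free read-modify-write pattern. As a hedged illustration for readers outside the kernel tree, below is a minimal user-space model of that retry loop, with C11 atomics standing in for the kernel's cmpxchg_relaxed(); the PTE here is just a bare 64-bit value, with bit 51 (the arm64 stage-2 DBM position) used for PTE_DBM.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_DBM	(UINT64_C(1) << 51)	/* arm64: DBM is descriptor bit 51 */

/* Model of kvm_set_s2pte_dbm(): retry until the update wins any race. */
static void set_dbm(_Atomic uint64_t *ptep)
{
	uint64_t old = atomic_load_explicit(ptep, memory_order_relaxed);

	/* On failure, 'old' is reloaded with the current value. */
	while (!atomic_compare_exchange_weak_explicit(ptep, &old,
						      old | PTE_DBM,
						      memory_order_relaxed,
						      memory_order_relaxed))
		;
}

int main(void)
{
	_Atomic uint64_t pte = UINT64_C(0x0040000000000743);

	set_dbm(&pte);
	printf("DBM set: %d\n", !!(atomic_load(&pte) & PTE_DBM));
	return 0;
}

[The retry matters because the MMU itself may update the PTE between the load and the store once DBM is armed; a plain |= would silently discard such a hardware update.]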
From patchwork Tue Jun 16 09:35:43 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11606993
From: Keqian Zhu
Subject: [PATCH 02/12] KVM: arm64: Modify stage2 young mechanism to support hw DBM
Date: Tue, 16 Jun 2020 17:35:43 +0800
Message-ID: <20200616093553.27512-3-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
Marking page table entries young (setting the AF bit) should be atomic, to avoid covering the dirty status set by hardware.

Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_mmu.h | 32 ++++++++++++++++++++++----------
 arch/arm64/kvm/mmu.c             | 15 ++++++++-------
 2 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e0ee6e23d626..51af71505fbc 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -215,6 +215,18 @@ static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 	} while (pteval != old_pteval);
 }
 
+static inline void kvm_set_s2pte_young(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_AF;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
 static inline bool kvm_s2pte_readonly(pte_t *ptep)
 {
 	return (READ_ONCE(pte_val(*ptep)) & PTE_S2_RDWR) == PTE_S2_RDONLY;
@@ -230,6 +242,11 @@ static inline void kvm_set_s2pmd_readonly(pmd_t *pmdp)
 	kvm_set_s2pte_readonly((pte_t *)pmdp);
 }
 
+static inline void kvm_set_s2pmd_young(pmd_t *pmdp)
+{
+	kvm_set_s2pte_young((pte_t *)pmdp);
+}
+
 static inline bool kvm_s2pmd_readonly(pmd_t *pmdp)
 {
 	return kvm_s2pte_readonly((pte_t *)pmdp);
@@ -245,6 +262,11 @@ static inline void kvm_set_s2pud_readonly(pud_t *pudp)
 	kvm_set_s2pte_readonly((pte_t *)pudp);
 }
 
+static inline void kvm_set_s2pud_young(pud_t *pudp)
+{
+	kvm_set_s2pte_young((pte_t *)pudp);
+}
+
 static inline bool kvm_s2pud_readonly(pud_t *pudp)
 {
 	return kvm_s2pte_readonly((pte_t *)pudp);
@@ -255,16 +277,6 @@ static inline bool kvm_s2pud_exec(pud_t *pudp)
 	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
 }
 
-static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
-{
-	return pud_mkyoung(pud);
-}
-
-static inline bool kvm_s2pud_young(pud_t pud)
-{
-	return pud_young(pud);
-}
-
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline bool kvm_hw_dbm_enabled(void)
 {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8c0035cab6b6..5ad87bce23c0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2008,8 +2008,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
  * Resolve the access fault by making the page young again.
  * Note that because the faulting entry is guaranteed not to be
  * cached in the TLB, we don't need to invalidate anything.
- * Only the HW Access Flag updates are supported for Stage 2 (no DBM),
- * so there is no need for atomic (pte|pmd)_mkyoung operations.
+ *
+ * Note: Both DBM and HW AF updates are supported for Stage2, so
+ * young operations should be atomic.
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
@@ -2027,15 +2028,15 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 		goto out;
 
 	if (pud) {		/* HugeTLB */
-		*pud = kvm_s2pud_mkyoung(*pud);
+		kvm_set_s2pud_young(pud);
 		pfn = kvm_pud_pfn(*pud);
 		pfn_valid = true;
 	} else	if (pmd) {	/* THP, HugeTLB */
-		*pmd = pmd_mkyoung(*pmd);
+		kvm_set_s2pmd_young(pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
-	} else {
-		*pte = pte_mkyoung(*pte);	/* Just a page... */
+	} else {		/* Just a page... */
+		kvm_set_s2pte_young(pte);
 		pfn = pte_pfn(*pte);
 		pfn_valid = true;
 	}
@@ -2280,7 +2281,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return 0;
 
 	if (pud)
-		return kvm_s2pud_young(*pud);
+		return pud_young(*pud);
 	else if (pmd)
 		return pmd_young(*pmd);
 	else
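[To see why a non-atomic mkyoung becomes unsafe once DBM is in play, consider the interleaving below, written out as a sequential user-space sketch. The bit positions and the single PTE_WRITE "dirty/write state" bit are simplifications of the real stage-2 encoding; only the shape of the lost update matters.]

#include <stdint.h>
#include <stdio.h>

#define PTE_AF		(UINT64_C(1) << 10)	/* access flag */
#define PTE_WRITE	(UINT64_C(1) << 7)	/* simplified dirty/write state */

int main(void)
{
	uint64_t pte = 0;		/* old, clean, read-only mapping */

	/* Software: a non-atomic mkyoung starts with a plain read. */
	uint64_t snapshot = pte;

	/* Hardware (DBM): a guest write marks the page dirty meanwhile. */
	pte |= PTE_WRITE;

	/* Software: writes back snapshot | AF, covering the dirty state. */
	pte = snapshot | PTE_AF;

	printf("dirty state survived: %s\n",
	       (pte & PTE_WRITE) ? "yes" : "no (lost update!)");
	return 0;
}

[The cmpxchg-based kvm_set_s2pte_young() above closes exactly this window: the write-back only succeeds if the PTE still holds the value that was read.]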
From patchwork Tue Jun 16 09:35:44 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11607009
From: Keqian Zhu
Subject: [PATCH 03/12] KVM: arm64: Report hardware dirty status of stage2 PTE if covered
Date: Tue, 16 Jun 2020 17:35:44 +0800
Message-ID: <20200616093553.27512-4-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

kvm_set_pte() is called to replace a target PTE with a desired one. We always do this without changing the desired one, but if the dirty status set by hardware is covered by the replacement, let the caller know it.

Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/mmu.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5ad87bce23c0..27407153121b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -194,11 +194,45 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr
 	put_page(virt_to_page(pmd));
 }
 
-static inline void kvm_set_pte(pte_t *ptep, pte_t new_pte)
+#ifdef CONFIG_ARM64_HW_AFDBM
+/**
+ * @ret: true if dirty status set by hardware is covered.
+ */
+static bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
+{
+	pteval_t old_pteval, new_pteval, pteval;
+	bool old_logging, new_no_write;
+
+	old_logging = kvm_hw_dbm_enabled() && !pte_none(*ptep) &&
+		      kvm_s2pte_dbm(ptep);
+	new_no_write = pte_none(new_pte) || kvm_s2pte_readonly(&new_pte);
+
+	if (!old_logging || !new_no_write) {
+		WRITE_ONCE(*ptep, new_pte);
+		dsb(ishst);
+		return false;
+	}
+
+	new_pteval = pte_val(new_pte);
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, new_pteval);
+	} while (pteval != old_pteval);
+
+	return !kvm_s2pte_readonly(&__pte(pteval));
+}
+#else
+/**
+ * @ret: true if dirty status set by hardware is covered.
+ */
+static inline bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
 {
 	WRITE_ONCE(*ptep, new_pte);
 	dsb(ishst);
+	return false;
 }
+#endif /* CONFIG_ARM64_HW_AFDBM */
 
 static inline void kvm_set_pmd(pmd_t *pmdp, pmd_t new_pmd)
 {
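[The cmpxchg loop in the new kvm_set_pte() exists to capture the final pre-replacement value atomically, so the dirty state it encodes can be reported. In user space the same semantics can be modelled with a single atomic exchange; a hedged sketch, with one write-permission bit again standing in for the full stage-2 encoding:]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_WRITE (UINT64_C(1) << 7)	/* writable at stage 2 => hw-dirty */

/*
 * Replace *ptep with new_pte and report whether the displaced value was
 * writable, i.e. whether a hardware-set dirty state was covered.
 */
static bool set_pte_report_covered_dirty(_Atomic uint64_t *ptep,
					 uint64_t new_pte)
{
	uint64_t old = atomic_exchange_explicit(ptep, new_pte,
						memory_order_relaxed);
	return old & PTE_WRITE;
}

int main(void)
{
	_Atomic uint64_t pte = PTE_WRITE;	/* hardware marked it dirty */

	if (set_pte_report_covered_dirty(&pte, 0))
		printf("dirty status was covered; caller must log it\n");
	return 0;
}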
From patchwork Tue Jun 16 09:35:45 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11607017
From: Keqian Zhu
Subject: [PATCH 04/12] KVM: arm64: Support clearing DBM bit for PTEs
Date: Tue, 16 Jun 2020 17:35:45 +0800
Message-ID: <20200616093553.27512-5-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

This adds support for clearing the DBM bit of PTEs, so that hardware DBM can be enabled dynamically.

Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/mmu.c              | 151 ++++++++++++++++++++++++++++++
 2 files changed, 153 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c3e6fcc664b1..9ea2dcfd609c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -480,6 +480,8 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+void kvm_mmu_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *memslot);
+void kvm_mmu_clear_dbm_all(struct kvm *kvm);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 27407153121b..f08b0fbca0a0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2446,6 +2446,157 @@ int kvm_mmu_init(void)
 	return err;
 }
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+/**
+ * stage2_clear_dbm_ptes() - clear DBM bit from PMD range
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_clear_dbm_ptes(pmd_t *pmd, phys_addr_t addr,
+				  phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte) && kvm_s2pte_dbm(pte))
+			kvm_clear_s2pte_dbm(pte);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_pmds() - clear DBM bit from PUD range
+ * @kvm:	The KVM pointer
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_clear_dbm_pmds(struct kvm *kvm, pud_t *pud,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = stage2_pmd_offset(kvm, pud, addr);
+	do {
+		next = stage2_pmd_addr_end(kvm, addr, end);
+		if (!pmd_none(*pmd) && !pmd_thp_or_huge(*pmd))
+			stage2_clear_dbm_ptes(pmd, addr, next);
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_puds() - clear DBM bit from P4D range
+ * @kvm:	The KVM pointer
+ * @p4d:	pointer to p4d entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_clear_dbm_puds(struct kvm *kvm, p4d_t *p4d,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = stage2_pud_offset(kvm, p4d, addr);
+	do {
+		next = stage2_pud_addr_end(kvm, addr, end);
+		if (!stage2_pud_none(kvm, *pud) && !stage2_pud_huge(kvm, *pud))
+			stage2_clear_dbm_pmds(kvm, pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_p4ds() - clear DBM bit from PGD range
+ * @kvm:	The KVM pointer
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_clear_dbm_p4ds(struct kvm *kvm, pgd_t *pgd,
+				  phys_addr_t addr, phys_addr_t end)
+{
+	p4d_t *p4d;
+	phys_addr_t next;
+
+	p4d = stage2_p4d_offset(kvm, pgd, addr);
+	do {
+		next = stage2_p4d_addr_end(kvm, addr, end);
+		if (!stage2_p4d_none(kvm, *p4d))
+			stage2_clear_dbm_puds(kvm, p4d, addr, next);
+	} while (p4d++, addr = next, addr != end);
+}
+
+/**
+ * stage2_clear_dbm_range() - clear DBM bit from stage2 memory
+ * region range
+ * @kvm:	The KVM pointer
+ * @addr:	Start address of range
+ * @end:	End address of range
+ */
+static void stage2_clear_dbm_range(struct kvm *kvm, phys_addr_t addr,
+				   phys_addr_t end)
+{
+	pgd_t *pgd;
+	phys_addr_t next;
+
+	pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
+	do {
+		cond_resched_lock(&kvm->mmu_lock);
+		if (!READ_ONCE(kvm->arch.pgd))
+			break;
+		next = stage2_pgd_addr_end(kvm, addr, end);
+		if (stage2_pgd_present(kvm, *pgd))
+			stage2_clear_dbm_p4ds(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_clear_dbm() - clear DBM bit from stage2 PTEs for memory slot
+ * @kvm:	The KVM pointer
+ * @memslot:	The memory slot to clear the DBM bit from
+ *
+ * After this function returns, the DBM bit of all block or page descriptors
+ * is cleared.
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+void kvm_mmu_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	spin_lock(&kvm->mmu_lock);
+	stage2_clear_dbm_range(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
+	kvm_flush_remote_tlbs(kvm);
+}
+
+/**
+ * kvm_mmu_clear_dbm_all() - clear DBM bit from stage2 PTEs for whole VM
+ * @kvm:	The KVM pointer
+ *
+ * Called with kvm->slots_lock mutex acquired.
+ */
+void kvm_mmu_clear_dbm_all(struct kvm *kvm)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslots = slots->memslots;
+	struct kvm_memory_slot *memslot;
+	int slot;
+
+	if (unlikely(!slots->used_slots))
+		return;
+
+	for (slot = 0; slot < slots->used_slots; slot++) {
+		memslot = &memslots[slot];
+		kvm_mmu_clear_dbm(kvm, memslot);
+	}
+}
+#endif /* CONFIG_ARM64_HW_AFDBM */
+
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   struct kvm_memory_slot *old,
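[Every stage2_clear_dbm_*() level above uses the same kernel walk idiom: clamp the step to the current table entry's span with an addr_end() helper, recurse one level down, then advance. A hedged stand-alone rendering of that idiom, assuming a 2 MiB block size to match a 4K-granule PMD:]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE	(UINT64_C(1) << 21)	/* 2 MiB, 4K granule */
#define PMD_MASK	(~(PMD_SIZE - 1))

/* Clamp the next step to the end of the current PMD, or to 'end'. */
static uint64_t pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr & PMD_MASK) + PMD_SIZE;

	return boundary < end ? boundary : end;
}

int main(void)
{
	uint64_t addr = 0x1ff000;	/* deliberately not PMD aligned */
	uint64_t end  = 0x600000;
	uint64_t next;

	do {
		next = pmd_addr_end(addr, end);
		/* A real walker would descend into the PTE level here. */
		printf("visit [%#" PRIx64 ", %#" PRIx64 ")\n", addr, next);
	} while (addr = next, addr != end);
	return 0;
}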
From patchwork Tue Jun 16 09:35:46 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11606999
From: Keqian Zhu
Subject: [PATCH 05/12] KVM: arm64: Add KVM_CAP_ARM_HW_DIRTY_LOG capability
Date: Tue, 16 Jun 2020 17:35:46 +0800
Message-ID: <20200616093553.27512-6-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

Because using arm64 DBM to log dirty pages has the side effect of long dirty log sync times, we should give userspace the opportunity to enable or disable this feature, so it can implement its own policy.

Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_host.h |  7 +++++++
 arch/arm64/kvm/arm.c              | 10 ++++++++++
 arch/arm64/kvm/reset.c            |  5 +++++
 include/uapi/linux/kvm.h          |  1 +
 tools/include/uapi/linux/kvm.h    |  1 +
 5 files changed, 24 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9ea2dcfd609c..2bc3256759e3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -95,6 +95,13 @@ struct kvm_arch {
 	 * supported.
 	 */
 	bool return_nisv_io_abort_to_user;
+
+	/*
+	 * Use hardware management of dirty status (DBM) to log dirty pages.
+	 * Userspace can enable this feature if KVM_CAP_ARM_HW_DIRTY_LOG is
+	 * supported.
+	 */
+	bool hw_dirty_log;
 };
 
 #define KVM_NR_MEM_OBJS 40
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 90cb90561446..850cc5cbc6f0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -87,6 +87,16 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		r = 0;
 		kvm->arch.return_nisv_io_abort_to_user = true;
 		break;
+#ifdef CONFIG_ARM64_HW_AFDBM
+	case KVM_CAP_ARM_HW_DIRTY_LOG:
+		if ((cap->args[0] & ~1) || !kvm_hw_dbm_enabled()) {
+			r = -EINVAL;
+		} else {
+			r = 0;
+			kvm->arch.hw_dirty_log = cap->args[0];
+		}
+		break;
+#endif
 	default:
 		r = -EINVAL;
 		break;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index d3b209023727..52bb801c9b2c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -83,6 +83,11 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = has_vhe() && system_supports_address_auth() &&
 		    system_supports_generic_auth();
 		break;
+#ifdef CONFIG_ARM64_HW_AFDBM
+	case KVM_CAP_ARM_HW_DIRTY_LOG:
+		r = kvm_hw_dbm_enabled();
+		break;
+#endif /* CONFIG_ARM64_HW_AFDBM */
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4fdf30316582..e0b12c43397b 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1031,6 +1031,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_SECURE_GUEST 181
 #define KVM_CAP_HALT_POLL 182
 #define KVM_CAP_ASYNC_PF_INT 183
+#define KVM_CAP_ARM_HW_DIRTY_LOG 184
 
 #ifdef KVM_CAP_IRQ_ROUTING
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index fdd632c833b4..53908a8881a4 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1017,6 +1017,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_VCPU_RESETS 179
 #define KVM_CAP_S390_PROTECTED 180
 #define KVM_CAP_PPC_SECURE_GUEST 181
+#define KVM_CAP_ARM_HW_DIRTY_LOG 184
 
 #ifdef KVM_CAP_IRQ_ROUTING
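[From the userspace side, the new capability would be probed and toggled through the standard KVM_CHECK_EXTENSION / KVM_ENABLE_CAP ioctls. A hedged sketch of a VMM snippet: the KVM_CAP_ARM_HW_DIRTY_LOG define only exists with this series applied, hence the fallback, and vm_fd is assumed to be an already-created VM file descriptor.]

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

#ifndef KVM_CAP_ARM_HW_DIRTY_LOG
#define KVM_CAP_ARM_HW_DIRTY_LOG 184	/* from this patch series */
#endif

/* Enable (1) or disable (0) DBM-based dirty logging for a VM. */
static int set_hw_dirty_log(int vm_fd, __u64 enable)
{
	struct kvm_enable_cap cap;

	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_HW_DIRTY_LOG) <= 0)
		return -1;	/* kernel or CPU lacks hardware DBM */

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_ARM_HW_DIRTY_LOG;
	cap.args[0] = enable;	/* anything other than 0/1 gets -EINVAL */

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}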
From patchwork Tue Jun 16 09:35:47 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11607013
From: Keqian Zhu
Subject: [PATCH 06/12] KVM: arm64: Set DBM bit of PTEs during write protecting
Date: Tue, 16 Jun 2020 17:35:47 +0800
Message-ID: <20200616093553.27512-7-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

While write protecting PTEs, if the hardware dirty log is enabled, set the DBM bit only on PTEs that are *already writable*. This ensures that mechanisms relying on "write fault", such as CoW, are not broken.
Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/kvm/mmu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f08b0fbca0a0..742c7943176f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1536,19 +1536,24 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 
 /**
  * stage2_wp_ptes - write protect PMD range
+ * @kvm:	kvm instance for the VM
  * @pmd:	pointer to pmd entry
  * @addr:	range start address
  * @end:	range end address
  */
-static void stage2_wp_ptes(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
+static void stage2_wp_ptes(struct kvm *kvm, pmd_t *pmd,
+			   phys_addr_t addr, phys_addr_t end)
 {
 	pte_t *pte;
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		if (!pte_none(*pte)) {
-			if (!kvm_s2pte_readonly(pte))
-				kvm_set_s2pte_readonly(pte);
+		if (!pte_none(*pte) && !kvm_s2pte_readonly(pte)) {
+#ifdef CONFIG_ARM64_HW_AFDBM
+			if (kvm->arch.hw_dirty_log && !kvm_s2pte_dbm(pte))
+				kvm_set_s2pte_dbm(pte);
+#endif
+			kvm_set_s2pte_readonly(pte);
 		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 }
@@ -1575,7 +1580,7 @@ static void stage2_wp_pmds(struct kvm *kvm, pud_t *pud,
 			if (!kvm_s2pmd_readonly(pmd))
 				kvm_set_s2pmd_readonly(pmd);
 		} else {
-			stage2_wp_ptes(pmd, addr, next);
+			stage2_wp_ptes(kvm, pmd, addr, next);
 		}
 	} while (pmd++, addr = next, addr != end);
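[The effect of the change can be modelled in a few lines: a write-protect pass now leaves a breadcrumb (DBM) only on entries that were writable to begin with, so CoW pages, which were never writable at stage 2, keep faulting on write. A hedged sketch with simplified bit names:]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_WRITE	(UINT64_C(1) << 7)	/* simplified S2 write perm */
#define PTE_DBM		(UINT64_C(1) << 51)

/* Model of the new stage2_wp_ptes() body for one entry. */
static uint64_t wp_pte(uint64_t pte, bool hw_dirty_log)
{
	if (pte && (pte & PTE_WRITE)) {
		if (hw_dirty_log)
			pte |= PTE_DBM;	/* remember it was writable */
		pte &= ~PTE_WRITE;	/* make it read-only */
	}
	return pte;
}

int main(void)
{
	uint64_t writable = PTE_WRITE;
	uint64_t cow = 0x1;	/* valid but read-only (e.g. CoW) */

	printf("writable -> DBM=%d\n", !!(wp_pte(writable, true) & PTE_DBM));
	printf("CoW page -> DBM=%d\n", !!(wp_pte(cow, true) & PTE_DBM));
	return 0;
}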
From patchwork Tue Jun 16 09:35:48 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11607011
From: Keqian Zhu
Subject: [PATCH 07/12] KVM: arm64: Scan PTEs to sync dirty log
Date: Tue, 16 Jun 2020 17:35:48 +0800
Message-ID: <20200616093553.27512-8-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

With hardware management of dirty state, the dirty state is stored in the PTEs, so we have to scan all PTEs to sync the dirty log to the memslot dirty bitmap.
Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/kvm/arm.c              |   6 +-
 arch/arm64/kvm/mmu.c              | 162 ++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c               |   4 +-
 4 files changed, 172 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2bc3256759e3..910ec33afea8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -489,6 +489,8 @@ void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 void kvm_mmu_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *memslot);
 void kvm_mmu_clear_dbm_all(struct kvm *kvm);
+void kvm_mmu_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
+void kvm_mmu_sync_dirty_log_all(struct kvm *kvm);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 850cc5cbc6f0..92f0b40a30fa 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1209,7 +1209,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
-
+#ifdef CONFIG_ARM64_HW_AFDBM
+	if (kvm->arch.hw_dirty_log) {
+		kvm_mmu_sync_dirty_log(kvm, memslot);
+	}
+#endif
 }
 
 void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 742c7943176f..3aa0303d83f0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2600,6 +2600,168 @@ void kvm_mmu_clear_dbm_all(struct kvm *kvm)
 		kvm_mmu_clear_dbm(kvm, memslot);
 	}
 }
+
+/**
+ * stage2_sync_dirty_log_ptes() - synchronize dirty log from PMD range
+ * @kvm:	The KVM pointer
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_ptes(struct kvm *kvm, pmd_t *pmd,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte) && !kvm_s2pte_readonly(pte))
+			mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_pmds() - synchronize dirty log from PUD range
+ * @kvm:	The KVM pointer
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_pmds(struct kvm *kvm, pud_t *pud,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = stage2_pmd_offset(kvm, pud, addr);
+	do {
+		next = stage2_pmd_addr_end(kvm, addr, end);
+		if (!pmd_none(*pmd) && !pmd_thp_or_huge(*pmd))
+			stage2_sync_dirty_log_ptes(kvm, pmd, addr, next);
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_puds() - synchronize dirty log from P4D range
+ * @kvm:	The KVM pointer
+ * @p4d:	pointer to p4d entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_puds(struct kvm *kvm, p4d_t *p4d,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = stage2_pud_offset(kvm, p4d, addr);
+	do {
+		next = stage2_pud_addr_end(kvm, addr, end);
+		if (!stage2_pud_none(kvm, *pud) && !stage2_pud_huge(kvm, *pud))
+			stage2_sync_dirty_log_pmds(kvm, pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_p4ds() - synchronize dirty log from PGD range
+ * @kvm:	The KVM pointer
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_p4ds(struct kvm *kvm, pgd_t *pgd,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	p4d_t *p4d;
+	phys_addr_t next;
+
+	p4d = stage2_p4d_offset(kvm, pgd, addr);
+	do {
+		next = stage2_p4d_addr_end(kvm, addr, end);
+		if (!stage2_p4d_none(kvm, *p4d))
+			stage2_sync_dirty_log_puds(kvm, p4d, addr, next);
+	} while (p4d++, addr = next, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_range() - synchronize dirty log from stage2 memory
+ * region range
+ * @kvm:	The KVM pointer
+ * @addr:	Start address of range
+ * @end:	End address of range
+ */
+static void stage2_sync_dirty_log_range(struct kvm *kvm, phys_addr_t addr,
+					phys_addr_t end)
+{
+	pgd_t *pgd;
+	phys_addr_t next;
+
+	pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
+	do {
+		cond_resched_lock(&kvm->mmu_lock);
+		if (!READ_ONCE(kvm->arch.pgd))
+			break;
+		next = stage2_pgd_addr_end(kvm, addr, end);
+		if (stage2_pgd_present(kvm, *pgd))
+			stage2_sync_dirty_log_p4ds(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_sync_dirty_log() - synchronize dirty log from stage2 PTEs for
+ * memory slot
+ * @kvm:	The KVM pointer
+ * @memslot:	The memory slot to synchronize the dirty log from
+ *
+ * Called to synchronize the dirty log (as marked by hardware) after the
+ * memory region KVM_GET_DIRTY_LOG operation is called. After this function
+ * returns, all dirty log information is collected into the memslot
+ * dirty_bitmap. (Hardware may keep modifying the page tables while this
+ * routine runs, so the result is complete only if the guest is stopped;
+ * that is fine, because no dirty log is missed in the end.) Afterwards
+ * dirty_bitmap can be copied to userspace.
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+void kvm_mmu_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	int idx;
+
+	if (WARN_ON_ONCE(!memslot->dirty_bitmap))
+		return;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	stage2_sync_dirty_log_range(kvm, start, end);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
+/**
+ * kvm_mmu_sync_dirty_log_all() - synchronize dirty log from PTEs for whole VM
+ * @kvm:	The KVM pointer
+ *
+ * Called with kvm->slots_lock mutex acquired.
+ */
+void kvm_mmu_sync_dirty_log_all(struct kvm *kvm)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslots = slots->memslots;
+	struct kvm_memory_slot *memslot;
+	int slot;
+
+	if (unlikely(!slots->used_slots))
+		return;
+
+	for (slot = 0; slot < slots->used_slots; slot++) {
+		memslot = &memslots[slot];
+		kvm_mmu_sync_dirty_log(kvm, memslot);
+	}
+}
 #endif /* CONFIG_ARM64_HW_AFDBM */
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a852af5c3214..3722343fd460 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2581,7 +2581,9 @@ static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 
-		set_bit_le(rel_gfn, memslot->dirty_bitmap);
+		/* Speed up if this bit has already been set */
+		if (!test_bit_le(rel_gfn, memslot->dirty_bitmap))
+			set_bit_le(rel_gfn, memslot->dirty_bitmap);
 	}
 }
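[The mark_page_dirty_in_slot() hunk is a classic check-then-set optimization: when almost every bit is already set, as a DBM scan over an unsynced slot will find, a plain read avoids a locked RMW on an already-dirty cache line. A user-space model with C11 atomics, assuming 64-bit bitmap words; it is safe here because bits are only ever set during logging, so a stale "already set" read is never wrong:]

#include <stdatomic.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/* Set bit nr, but skip the atomic RMW if the bit is already set. */
static void set_bit_fast(_Atomic uint64_t *bitmap, unsigned long nr)
{
	_Atomic uint64_t *word = &bitmap[nr / BITS_PER_WORD];
	uint64_t mask = UINT64_C(1) << (nr % BITS_PER_WORD);

	if (!(atomic_load_explicit(word, memory_order_relaxed) & mask))
		atomic_fetch_or_explicit(word, mask, memory_order_relaxed);
}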
X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 11606997 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 89E7613B6 for ; Tue, 16 Jun 2020 09:37:55 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 426782074D for ; Tue, 16 Jun 2020 09:37:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="iYql0VQb" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 426782074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Q6l2w8/vWhzb6mIsYRX+//Q8NzCJv7sbB7GTA8o7jCw=; b=iYql0VQbYwCwmp 69UpLPrLhATDMaLyZhCYfQb7hkILI52C0OZteyxSlU/LfTymxe1mX+DDBHNCvR4TdABICuHoT633L 0PTIOcyMxg3p6SYV0BVhZL0+j1HnUrQQKoHBxTeBWTMw5bVuv/4GslxnDKJHZD4ojm9V4rsEeFUi8 L7cN5xYmwi7hfFBnmfmXSGaDSs/OV50ZfCvcTdSEdI6/jkVpbVIglTKegEKHrlxR6jEU0Jr7Pl371 T8lYSTzBwqiA1NxvlNduH09HVCpNs5LCm135rzT007ZJMe/ni6yfhufqMrt7jAJ3h2ebvO/rDQTdW GE/rjvIkGvWgur/SPcVg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jl82M-0002rq-Rw; Tue, 16 Jun 2020 09:37:46 +0000 Received: from szxga05-in.huawei.com ([45.249.212.191] helo=huawei.com) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jl810-0001nJ-6Z for linux-arm-kernel@lists.infradead.org; Tue, 16 Jun 2020 09:36:27 +0000 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by Forcepoint Email with ESMTP id DC5347673396AED17823; Tue, 16 Jun 2020 17:36:18 +0800 (CST) Received: from DESKTOP-5IS4806.china.huawei.com (10.173.221.230) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.487.0; Tue, 16 Jun 2020 17:36:10 +0800 From: Keqian Zhu To: , , , Subject: [PATCH 08/12] KVM: Omit dirty log sync in log clear if initially all set Date: Tue, 16 Jun 2020 17:35:49 +0800 Message-ID: <20200616093553.27512-9-zhukeqian1@huawei.com> X-Mailer: git-send-email 2.8.4.windows.1 In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com> References: <20200616093553.27512-1-zhukeqian1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.173.221.230] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20200616_023622_543222_182C23B5 X-CRM114-Status: GOOD ( 10.18 ) X-Spam-Score: -2.3 (--) X-Spam-Report: SpamAssassin version 3.4.4 on bombadil.infradead.org summary: Content analysis details: (-2.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- -2.3 RCVD_IN_DNSWL_MED RBL: Sender listed at https://www.dnswl.org/, 
Synchronizing the dirty log during log clear is useful only when the
userspace dirty bitmap contains dirty bits that the memslot dirty bitmap
does not yet contain: syncing pulls those new dirty bits into the memslot
bitmap so they can be cleared in the same pass, instead of being reported
to userspace later. With the dirty bitmap "initially all set" feature,
this situation cannot arise as long as userspace behaves normally, so we
can omit the dirty log sync during log clear. This is valuable when the
sync is a high-cost operation, as it is with arm64 DBM.

Signed-off-by: Keqian Zhu
---
 virt/kvm/kvm_main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3722343fd460..6c147d6f9da6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1554,7 +1554,8 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	    (log->num_pages < memslot->npages - log->first_page &&
 	     (log->num_pages & 63)))
 		return -EINVAL;
 
-	kvm_arch_sync_dirty_log(kvm, memslot);
+	if (!kvm_dirty_log_manual_protect_and_init_set(kvm))
+		kvm_arch_sync_dirty_log(kvm, memslot);
 
 	flush = false;
 	dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
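For context, the predicate tested in the hunk above is the helper that
arrived with the KVM_DIRTY_LOG_INITIALLY_SET capability. Sketched from the
contemporaneous virt/kvm/kvm_main.c; treat the exact form as approximate:

static inline bool kvm_dirty_log_manual_protect_and_init_set(struct kvm *kvm)
{
	/* True iff userspace enabled manual protect with "initially all set". */
	return !!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET);
}

With this in place, the sync is skipped exactly when the memslot bitmap
starts out all set and userspace is expected to clear ranges before
reading them.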
From patchwork Tue Jun 16 09:35:50 2020
From: Keqian Zhu
Subject: [PATCH 09/12] KVM: arm64: Stepwise write protect page table by mask bit
Date: Tue, 16 Jun 2020 17:35:50 +0800
Message-ID: <20200616093553.27512-10-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

During dirty log clear, page table entries are write protected according
to a mask. Previously we write protected the whole range of entries from
__ffs(mask) to __fls(mask). Even though that range may contain zero bits,
this was harmless: we hold the kvm mmu lock, so nothing else could change
the entries we touched but did not mean to.

We are about to add support for hardware management of dirty state on
arm64, and then holding the kvm mmu lock is no longer enough: hardware
updates dirty state without taking the lock, so write protecting a page
whose mask bit is clear could discard dirty state the hardware has set.
Write protect entries stepwise, one mask bit at a time.
Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/mmu.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3aa0303d83f0..898e272a2c07 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1710,10 +1710,16 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+	phys_addr_t start, end;
+	u32 i;
 
-	stage2_wp_range(kvm, start, end);
+	for (i = __ffs(mask); i <= __fls(mask); i++) {
+		if (test_bit_le(i, &mask)) {
+			start = (base_gfn + i) << PAGE_SHIFT;
+			end = (base_gfn + i + 1) << PAGE_SHIFT;
+			stage2_wp_range(kvm, start, end);
+		}
+	}
 }
 
 /*
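A possible simplification, not part of the posted patch: the same per-bit
walk can be written with the generic bit iterator from <linux/bitops.h>,
which skips clear bits instead of testing every position between ffs and
fls. A sketch of an equivalent form:

static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
					    struct kvm_memory_slot *slot,
					    gfn_t gfn_offset,
					    unsigned long mask)
{
	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
	int i;

	/* Visit only the set bits; pages with clear bits stay untouched. */
	for_each_set_bit(i, &mask, BITS_PER_LONG)
		stage2_wp_range(kvm, (base_gfn + i) << PAGE_SHIFT,
				(base_gfn + i + 1) << PAGE_SHIFT);
}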
From patchwork Tue Jun 16 09:35:51 2020
From: Keqian Zhu
Subject: [PATCH 10/12] KVM: arm64: Save stage2 PTE dirty status if it is covered
Date: Tue, 16 Jun 2020 17:35:51 +0800
Message-ID: <20200616093553.27512-11-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

Two types of operations change PTEs and may cover dirty status already
set by hardware:

1. Stage2 PTE unmapping: page table merging (the revert of huge page
   table dissolving), kvm_unmap_hva_range() and so on.
2. Stage2 PTE changing: user_mem_abort(), kvm_mmu_notifier_change_pte()
   and so on.

All of these ultimately invoke kvm_set_pte(). When the old PTE carried
hardware-set dirty status, save that status into the memslot dirty
bitmap.

Question: should we acquire kvm->slots_lock when invoking
mark_page_dirty()? It seems that user_mem_abort() does not hold this
lock when it calls it.
Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 898e272a2c07..a230fbcf3889 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -294,15 +294,23 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
 {
 	phys_addr_t start_addr = addr;
 	pte_t *pte, *start_pte;
+	bool dirty_covered;
+	int idx;
 
 	start_pte = pte = pte_offset_kernel(pmd, addr);
 	do {
 		if (!pte_none(*pte)) {
 			pte_t old_pte = *pte;
 
-			kvm_set_pte(pte, __pte(0));
+			dirty_covered = kvm_set_pte(pte, __pte(0));
 			kvm_tlb_flush_vmid_ipa(kvm, addr);
 
+			if (dirty_covered) {
+				idx = srcu_read_lock(&kvm->srcu);
+				mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+				srcu_read_unlock(&kvm->srcu, idx);
+			}
+
 			/* No need to invalidate the cache for device mappings */
 			if (!kvm_is_device_pfn(pte_pfn(old_pte)))
 				kvm_flush_dcache_pte(old_pte);
@@ -1388,6 +1396,8 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 	pte_t *pte, old_pte;
 	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
 	bool logging_active = flags & KVM_S2_FLAG_LOGGING_ACTIVE;
+	bool dirty_covered;
+	int idx;
 
 	VM_BUG_ON(logging_active && !cache);
 
@@ -1453,8 +1463,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 		if (pte_val(old_pte) == pte_val(*new_pte))
 			return 0;
 
-		kvm_set_pte(pte, __pte(0));
+		dirty_covered = kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+
+		if (dirty_covered) {
+			idx = srcu_read_lock(&kvm->srcu);
+			mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+			srcu_read_unlock(&kvm->srcu, idx);
+		}
 	} else {
 		get_page(virt_to_page(pte));
 	}
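The hunks above depend on kvm_set_pte() now returning a bool saying
whether the PTE it replaced carried hardware-set dirty state; that change
to kvm_set_pte() is made elsewhere in the series and is not shown in this
excerpt. A sketch of what such a variant could look like, assuming a
stage-2 PTE counts as hardware-dirty when it is valid, has DBM set and is
currently writable (the exact dirty test here is an assumption, not the
posted code):

static inline bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
{
	pteval_t old = READ_ONCE(pte_val(*ptep));

	WRITE_ONCE(*ptep, new_pte);
	dsb(ishst);

	/*
	 * Simplified: the read and write above are not one atomic
	 * exchange, so a real implementation would need to close the
	 * window in which hardware can still mark the old PTE dirty.
	 */
	return kvm_hw_dbm_enabled() && (old & PTE_VALID) &&
	       (old & PTE_DBM) && ((old & PTE_S2_RDWR) == PTE_S2_RDWR);
}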
From patchwork Tue Jun 16 09:35:52 2020
From: Keqian Zhu
Subject: [PATCH 11/12] KVM: arm64: Support disabling hw dirty log after enable
Date: Tue, 16 Jun 2020 17:35:52 +0800
Message-ID: <20200616093553.27512-12-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>
When hardware dirty log is being disabled, first clear the DBM bit of all
PTEs and flush the TLB, then sync the dirty log; this guarantees we do
not miss any dirty status set by hardware.

Signed-off-by: Keqian Zhu
---
 arch/arm64/kvm/arm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 92f0b40a30fa..76cab4c0b5a6 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -93,6 +93,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			r = -EINVAL;
 		} else {
 			r = 0;
+			if (kvm->arch.hw_dirty_log && !cap->args[0]) {
+				mutex_lock(&kvm->slots_lock);
+				kvm_mmu_clear_dbm_all(kvm);
+				kvm_mmu_sync_dirty_log_all(kvm);
+				mutex_unlock(&kvm->slots_lock);
+			}
 			kvm->arch.hw_dirty_log = cap->args[0];
 		}
 		break;
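kvm_mmu_clear_dbm_all() is introduced elsewhere in the series and only its
caller is visible in this excerpt. Assuming it mirrors
kvm_mmu_sync_dirty_log_all() from patch 07 above, a sketch; the per-slot
helper kvm_mmu_clear_dbm() is hypothetical here:

void kvm_mmu_clear_dbm_all(struct kvm *kvm)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	struct kvm_memory_slot *memslot;
	int slot;

	/*
	 * Strip DBM from every used memslot's stage2 range (flushing
	 * the TLB) so hardware can no longer set dirty state behind
	 * the kvm_mmu_sync_dirty_log_all() that follows.
	 */
	for (slot = 0; slot < slots->used_slots; slot++) {
		memslot = &slots->memslots[slot];
		kvm_mmu_clear_dbm(kvm, memslot);	/* hypothetical */
	}
}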
From patchwork Tue Jun 16 09:35:53 2020
From: Keqian Zhu
Subject: [PATCH 12/12] KVM: arm64: Enable stage2 hardware DBM
Date: Tue, 16 Jun 2020 17:35:53 +0800
Message-ID: <20200616093553.27512-13-zhukeqian1@huawei.com>
In-Reply-To: <20200616093553.27512-1-zhukeqian1@huawei.com>
References: <20200616093553.27512-1-zhukeqian1@huawei.com>

We are now ready to support hardware management of dirty state, so enable
stage2 hardware DBM if the hardware supports it.

Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/sysreg.h | 2 ++
 arch/arm64/kvm/reset.c          | 9 ++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..b22bd903284d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -744,6 +744,8 @@
 #define ID_AA64MMFR1_VMIDBITS_8		0
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
+#define ID_AA64MMFR1_HADBS_DBS		2
+
 /* id_aa64mmfr2 */
 #define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 52bb801c9b2c..c1215b13bdd5 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -427,7 +427,7 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
 	u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
 	u32 parange, phys_shift;
-	u8 lvls;
+	u8 lvls, hadbs;
 
 	if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
 		return -EINVAL;
@@ -465,6 +465,13 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	 */
 	vtcr |= VTCR_EL2_HA;
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+	hadbs = (read_sysreg(id_aa64mmfr1_el1) >>
+		 ID_AA64MMFR1_HADBS_SHIFT) & 0xf;
+	if (hadbs == ID_AA64MMFR1_HADBS_DBS)
+		vtcr |= VTCR_EL2_HD;
+#endif
+
 	/* Set the vmid bits */
 	vtcr |= (kvm_get_vmid_bits() == 16) ? VTCR_EL2_VS_16BIT :