From patchwork Mon May 25 11:24:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11568617
From: Keqian Zhu
Subject: [RFC PATCH 2/7] KVM: arm64: Set DBM bit of PTEs if hw DBM enabled
Date: Mon, 25 May 2020 19:24:01 +0800
Message-ID: <20200525112406.28224-3-zhukeqian1@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>
References: <20200525112406.28224-1-zhukeqian1@huawei.com>
MIME-Version: 1.0
Cc: Suzuki K Poulose, Catalin Marinas, Keqian Zhu, Sean Christopherson,
 Peng Liang, Alexios Zavras, zhengxiang9@huawei.com, Mark Brown,
 James Morse, Marc Zyngier, wanghaibin.wang@huawei.com, Thomas Gleixner,
 Will Deacon, Andrew Morton, Julien Thierry

In user_mem_abort(), for the normal case (mem_type is PAGE_S2), set the DBM
bit of PTEs if hardware DBM is enabled. We also check and set the DBM bit
while write-protecting PTEs, so that it still works even if we miss setting
it in some cases.

Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/pgtable-prot.h |  1 +
 virt/kvm/arm/mmu.c                    | 14 +++++++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 1305e28225fc..f9910ba2afd8 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -79,6 +79,7 @@ extern bool arm64_use_ng_mappings;
 	})

 #define PAGE_S2			__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PAGE_S2_XN)
+#define PAGE_S2_DBM		__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PAGE_S2_XN | PTE_DBM)
 #define PAGE_S2_DEVICE		__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN)

 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..dc97988eb2e0 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1426,6 +1426,10 @@ static void stage2_wp_ptes(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
 	pte = pte_offset_kernel(pmd, addr);
 	do {
 		if (!pte_none(*pte)) {
+#ifdef CONFIG_ARM64_HW_AFDBM
+			if (kvm_hw_dbm_enabled() && !kvm_s2pte_dbm(pte))
+				kvm_set_s2pte_dbm(pte);
+#endif
 			if (!kvm_s2pte_readonly(pte))
 				kvm_set_s2pte_readonly(pte);
 		}
@@ -1827,7 +1831,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,

 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
-		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
+		pte_t new_pte;
+
+#ifdef CONFIG_ARM64_HW_AFDBM
+		if (kvm_hw_dbm_enabled() &&
+		    pgprot_val(mem_type) == pgprot_val(PAGE_S2)) {
+			mem_type = PAGE_S2_DBM;
+		}
+#endif
+		new_pte = kvm_pfn_pte(pfn, mem_type);

 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
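
For context on the helpers used above: kvm_hw_dbm_enabled(), kvm_s2pte_dbm()
and kvm_set_s2pte_dbm() are introduced earlier in this series and are not part
of this patch. The sketch below is only an assumption of how the two stage-2
accessors could look, modelled on the existing kvm_set_s2pte_readonly()
cmpxchg pattern and assuming PTE_DBM is the hardware Dirty Bit Modifier bit;
it is not the series' actual implementation.

/* Illustrative sketch only -- not taken from this patch or series. */
static inline bool kvm_s2pte_dbm(pte_t *ptep)
{
	/* Report whether the Dirty Bit Modifier is already set. */
	return !!(READ_ONCE(pte_val(*ptep)) & PTE_DBM);
}

static inline void kvm_set_s2pte_dbm(pte_t *ptep)
{
	pteval_t old_pteval, pteval;

	/* Set DBM atomically so concurrent hardware PTE updates are not lost. */
	pteval = READ_ONCE(pte_val(*ptep));
	do {
		old_pteval = pteval;
		pteval |= PTE_DBM;
		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
	} while (pteval != old_pteval);
}

With DBM set, a write to the page lets the MMU clear the stage-2 read-only
permission in hardware instead of faulting, which is what makes the
write-protect path above safe to relax.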