From patchwork Mon May 25 11:24:03 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11568597
From: Keqian Zhu <zhukeqian1@huawei.com>
Cc: Catalin Marinas, Marc Zyngier, James Morse, Will Deacon,
 Suzuki K Poulose, Sean Christopherson, Julien Thierry, Mark Brown,
 Thomas Gleixner, Andrew Morton, Alexios Zavras, Keqian Zhu
Subject: [RFC PATCH 4/7] KVM: arm64: Stepwise write protect page table by mask bit
Date: Mon, 25 May 2020 19:24:03 +0800
Message-ID: <20200525112406.28224-5-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>
References: <20200525112406.28224-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

During dirty log clear, page table entries are write protected according to
a mask. Previously we write protected all entries covered by the mask, from
ffs to fls. Even though there may be zero bits within that range, we hold
the kvm mmu lock, so we will not write protect entries that we do not want
to.

We are about to add support for hardware management of dirty state to
arm64, where holding the kvm mmu lock will no longer be enough. We should
write protect entries one by one, stepping through the set bits of the mask.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 virt/kvm/arm/mmu.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ff8df9702e04..779859b85d6d 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1568,10 +1568,16 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+	phys_addr_t start, end;
+	u32 i;
 
-	stage2_wp_range(kvm, start, end);
+	for (i = __ffs(mask); i <= __fls(mask); i++) {
+		if (test_bit(i, &mask)) {
+			start = (base_gfn + i) << PAGE_SHIFT;
+			end = (base_gfn + i + 1) << PAGE_SHIFT;
+			stage2_wp_range(kvm, start, end);
+		}
+	}
 }
 
 /*
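
Illustrative note (not part of the patch): below is a minimal user-space
sketch of the same idea, walking only the set bits of the dirty mask and
write protecting one page at a time, rather than covering the whole
[__ffs(mask), __fls(mask)] span as a single range. It uses a lowest-set-bit
walk instead of the patch's __ffs()..__fls() scan with test_bit(), and
wp_range() is a hypothetical stand-in for stage2_wp_range().

#include <stdio.h>

#define PAGE_SHIFT 12

static void wp_range(unsigned long start, unsigned long end)
{
	printf("write-protect [0x%lx, 0x%lx)\n", start, end);
}

static void wp_pt_masked(unsigned long base_gfn, unsigned long mask)
{
	while (mask) {
		int i = __builtin_ctzl(mask);	/* index of the lowest set bit */

		wp_range((base_gfn + i) << PAGE_SHIFT,
			 (base_gfn + i + 1) << PAGE_SHIFT);
		mask &= mask - 1;		/* clear that bit */
	}
}

int main(void)
{
	/* Mask 0x9 (0b1001): only pages 0 and 3 past base_gfn are touched. */
	wp_pt_masked(0x1000, 0x9);
	return 0;
}

With mask 0x9 the old code would have write protected the whole four-page
range in one call; the per-bit walk skips the two clean pages in the middle,
which matters once dirty state can be set by hardware rather than only under
the kvm mmu lock.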