From patchwork Fri Aug 25 09:35:25 2023
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13365367
From: Shameer Kolothum
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v2 5/8] KVM: arm64: Add some HW_DBM related mmu interfaces
Date: Fri, 25 Aug 2023 10:35:25 +0100
Message-ID: <20230825093528.1637-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20230825093528.1637-1-shameerali.kolothum.thodi@huawei.com>
References: <20230825093528.1637-1-shameerali.kolothum.thodi@huawei.com>

From: Keqian Zhu

Add set_dbm, clear_dbm and sync_dirty interfaces to the mmu layer. They are
thin wrappers around the corresponding pgtable-layer interfaces.
Signed-off-by: Keqian Zhu
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_mmu.h |  7 +++++++
 arch/arm64/kvm/mmu.c             | 30 ++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0e1e1ab17b4d..86e1e074337b 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -170,6 +170,13 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
 			     void **haddr);
 void __init free_hyp_pgds(void);
 
+void kvm_stage2_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			  gfn_t gfn_offset, unsigned long npages);
+void kvm_stage2_set_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			gfn_t gfn_offset, unsigned long npages);
+void kvm_stage2_sync_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
+			   gfn_t gfn_offset, unsigned long npages);
+
 void stage2_unmap_vm(struct kvm *kvm);
 int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
 void kvm_uninit_stage2_mmu(struct kvm *kvm);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b16aff3f65f6..f5ae4b97df4d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1149,6 +1149,36 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		kvm_mmu_split_huge_pages(kvm, start, end);
 }
 
+void kvm_stage2_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			  gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_clear_dbm);
+}
+
+void kvm_stage2_set_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_set_dbm, false);
+}
+
+void kvm_stage2_sync_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
+			   gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_sync_dirty, false);
+}
+
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
 {
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);