From patchwork Wed May 29 08:42:32 2024
X-Patchwork-Submitter: Liju-clr Chen
X-Patchwork-Id: 13678467
From: Liju-clr Chen
To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Jonathan Corbet,
 Catalin Marinas, Will Deacon, Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Richard Cochran, Matthias Brugger,
 AngeloGioacchino Del Regno, Liju-clr Chen, Yingshiuan Pan, Ze-yu Wang
CC: David Bradil, Trilok Soni, Shawn Hsiao, PeiLun Suei, Chi-shen Yeh,
 Kevenny Hsieh
Subject: [PATCH v11 14/21] virt: geniezone: Optimize performance of protected VM memory
Date: Wed, 29 May 2024 16:42:32 +0800
Message-ID: <20240529084239.11478-15-liju-clr.chen@mediatek.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20240529084239.11478-1-liju-clr.chen@mediatek.com>
References: <20240529084239.11478-1-liju-clr.chen@mediatek.com>

From: Yi-De Wu

From: "Yingshiuan Pan"

The memory protection mechanism performs better with batch operations on
memory pages. To leverage this, we pre-allocate memory for VMs that are set
to protected mode. As a result, the memory protection mechanism can protect
the pre-allocated memory in advance through batch operations, improving
performance during VM boot.

Signed-off-by: Yingshiuan Pan
Signed-off-by: Jerry Wang
Signed-off-by: Liju Chen
Signed-off-by: Yi-De Wu
---
 arch/arm64/geniezone/vm.c             | 127 ++++++++++++++++++++++++++
 include/linux/soc/mediatek/gzvm_drv.h |   1 +
 2 files changed, 128 insertions(+)

diff --git a/arch/arm64/geniezone/vm.c b/arch/arm64/geniezone/vm.c
index 00c74a4bcfd7..db16716f5b8d 100644
--- a/arch/arm64/geniezone/vm.c
+++ b/arch/arm64/geniezone/vm.c
@@ -11,6 +11,8 @@
 #include
 #include "gzvm_arch_common.h"
 
+#define PAR_PA47_MASK GENMASK_ULL(47, 12)
+
 /**
  * gzvm_hypcall_wrapper() - the wrapper for hvc calls
  * @a0: arguments passed in registers 0
@@ -201,6 +203,126 @@ static int gzvm_vm_ioctl_get_pvmfw_size(struct gzvm *gzvm,
 	return 0;
 }
 
+/**
+ * fill_constituents() - Populate pa to buffer until full
+ * @consti: Pointer to struct mem_region_addr_range.
+ * @consti_cnt: Constituent count.
+ * @max_nr_consti: Maximum number of constituents.
+ * @gfn: Guest frame number.
+ * @total_pages: Total page numbers.
+ * @slot: Pointer to struct gzvm_memslot.
+ *
+ * Return: how many pages we've filled in, negative if error
+ */
+static int fill_constituents(struct mem_region_addr_range *consti,
+                             int *consti_cnt, int max_nr_consti, u64 gfn,
+                             u32 total_pages, struct gzvm_memslot *slot)
+{
+        u64 pfn = 0, prev_pfn = 0, gfn_end = 0;
+        int nr_pages = 0;
+        int i = -1;
+
+        if (unlikely(total_pages == 0))
+                return -EINVAL;
+        gfn_end = gfn + total_pages;
+
+        while (i < max_nr_consti && gfn < gfn_end) {
+                if (pfn == (prev_pfn + 1)) {
+                        consti[i].pg_cnt++;
+                } else {
+                        i++;
+                        if (i >= max_nr_consti)
+                                break;
+                        consti[i].address = PFN_PHYS(pfn);
+                        consti[i].pg_cnt = 1;
+                }
+                prev_pfn = pfn;
+                gfn++;
+                nr_pages++;
+        }
+        if (i != max_nr_consti)
+                i++;
+        *consti_cnt = i;
+
+        return nr_pages;
+}
+
+/**
+ * gzvm_vm_populate_mem_region() - Iterate all mem slots and populate pa to
+ *                                 buffer until it's full
+ * @gzvm: Pointer to struct gzvm.
+ * @slot_id: Memory slot id to be populated.
+ *
+ * Return: 0 if it is successful, negative if error
+ */
+int gzvm_vm_populate_mem_region(struct gzvm *gzvm, int slot_id)
+{
+        struct gzvm_memslot *memslot = &gzvm->memslot[slot_id];
+        struct gzvm_memory_region_ranges *region;
+        int max_nr_consti, remain_pages;
+        u64 gfn, gfn_end;
+        u32 buf_size;
+
+        buf_size = PAGE_SIZE * 2;
+        region = alloc_pages_exact(buf_size, GFP_KERNEL);
+        if (!region)
+                return -ENOMEM;
+
+        max_nr_consti = (buf_size - sizeof(*region)) /
+                        sizeof(struct mem_region_addr_range);
+
+        region->slot = memslot->slot_id;
+        remain_pages = memslot->npages;
+        gfn = memslot->base_gfn;
+        gfn_end = gfn + remain_pages;
+
+        while (gfn < gfn_end) {
+                int nr_pages;
+
+                nr_pages = fill_constituents(region->constituents,
+                                             &region->constituent_cnt,
+                                             max_nr_consti, gfn,
+                                             remain_pages, memslot);
+
+                if (nr_pages < 0) {
+                        pr_err("Failed to fill constituents\n");
+                        free_pages_exact(region, buf_size);
+                        return -EFAULT;
+                }
+
+                region->gpa = PFN_PHYS(gfn);
+                region->total_pages = nr_pages;
+                remain_pages -= nr_pages;
+                gfn += nr_pages;
+
+                if (gzvm_arch_set_memregion(gzvm->vm_id, buf_size,
+                                            virt_to_phys(region))) {
+                        pr_err("Failed to register memregion to hypervisor\n");
+                        free_pages_exact(region, buf_size);
+                        return -EFAULT;
+                }
+        }
+        free_pages_exact(region, buf_size);
+
+        return 0;
+}
+
+static int populate_all_mem_regions(struct gzvm *gzvm)
+{
+        int ret, i;
+
+        for (i = 0; i < GZVM_MAX_MEM_REGION; i++) {
+                if (gzvm->memslot[i].npages == 0)
+                        continue;
+
+                ret = gzvm_vm_populate_mem_region(gzvm, i);
+                if (ret != 0)
+                        return ret;
+        }
+
+        return 0;
+}
+
 /**
  * gzvm_vm_ioctl_cap_pvm() - Proceed GZVM_CAP_PROTECTED_VM's subcommands
  * @gzvm: Pointer to struct gzvm.
@@ -222,6 +344,11 @@ static int gzvm_vm_ioctl_cap_pvm(struct gzvm *gzvm,
 	case GZVM_CAP_PVM_SET_PVMFW_GPA:
 		fallthrough;
 	case GZVM_CAP_PVM_SET_PROTECTED_VM:
+		/*
+		 * To improve performance for protected VMs, we have to populate
+		 * the VM's memory before it boots.
+		 */
+		populate_all_mem_regions(gzvm);
 		ret = gzvm_vm_arch_enable_cap(gzvm, cap, &res);
 		return ret;
 	case GZVM_CAP_PVM_GET_PVMFW_SIZE:
diff --git a/include/linux/soc/mediatek/gzvm_drv.h b/include/linux/soc/mediatek/gzvm_drv.h
index 982463eea4f6..54ac91670611 100644
--- a/include/linux/soc/mediatek/gzvm_drv.h
+++ b/include/linux/soc/mediatek/gzvm_drv.h
@@ -159,6 +159,7 @@ int gzvm_vm_ioctl_arch_enable_cap(struct gzvm *gzvm,
 
 int gzvm_gfn_to_hva_memslot(struct gzvm_memslot *memslot, u64 gfn,
 			    u64 *hva_memslot);
+int gzvm_vm_populate_mem_region(struct gzvm *gzvm, int slot_id);
 int gzvm_vm_ioctl_create_vcpu(struct gzvm *gzvm, u32 cpuid);
 
 int gzvm_arch_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, __u64 reg_id,
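
For readers who want to see the batching idea in isolation: fill_constituents()
coalesces physically contiguous page frames into (address, pg_cnt)
"constituents" so that gzvm_arch_set_memregion() can hand the hypervisor whole
runs of pages per call instead of single pages. The stand-alone user-space C
sketch below models only that coalescing step under simplified assumptions; the
struct name, the PAGE_SHIFT value, the sample PFN list, and the idea of taking
an already-resolved PFN array are illustrative and are not part of the patch.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical stand-in for the patch's struct mem_region_addr_range. */
struct constituent {
        uint64_t address;   /* physical address of the first page in the run */
        uint32_t pg_cnt;    /* number of contiguous pages in the run */
};

/* Coalesce nr_pfns page-frame numbers into at most max contiguous runs. */
static int coalesce(const uint64_t *pfns, int nr_pfns,
                    struct constituent *out, int max)
{
        int n = -1;

        for (int i = 0; i < nr_pfns; i++) {
                if (n >= 0 && pfns[i] == pfns[i - 1] + 1) {
                        out[n].pg_cnt++;        /* extend the current run */
                } else {
                        if (++n >= max)
                                break;          /* buffer full; a real caller batches again */
                        out[n].address = pfns[i] << PAGE_SHIFT;
                        out[n].pg_cnt = 1;
                }
        }
        return n + 1;                           /* number of runs produced */
}

int main(void)
{
        /* Hypothetical PFNs: two contiguous runs, 0x100..0x102 and 0x200..0x201. */
        const uint64_t pfns[] = { 0x100, 0x101, 0x102, 0x200, 0x201 };
        struct constituent runs[4];
        int i, n = coalesce(pfns, 5, runs, 4);

        for (i = 0; i < n; i++)
                printf("run %d: pa=0x%llx pages=%u\n", i,
                       (unsigned long long)runs[i].address, runs[i].pg_cnt);
        return 0;
}

With the sample input this prints two runs (0x100000 for 3 pages and 0x200000
for 2 pages), mirroring how the kernel side would emit two constituents for
that layout and register them with the hypervisor in a single batched call.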