From patchwork Mon Nov 30 12:18:45 2020
From: Yanan Wang
To: Marc Zyngier, Catalin Marinas, Will Deacon, James Morse,
    Julien Thierry, Suzuki K Poulose, Gavin Shan, Quentin Perret
Subject: [RFC PATCH 1/3] KVM: arm64: Fix possible memory leak in kvm stage2
Date: Mon, 30 Nov 2020 20:18:45 +0800
Message-ID: <20201130121847.91808-2-wangyanan55@huawei.com>
In-Reply-To: <20201130121847.91808-1-wangyanan55@huawei.com>
References: <20201130121847.91808-1-wangyanan55@huawei.com>
Cc: lushenming@huawei.com, jiangkunkun@huawei.com, Yanan Wang,
    yezengruan@huawei.com, wangjingyi11@huawei.com, yuzenghui@huawei.com,
    wanghaibin.wang@huawei.com, zhukeqian1@huawei.com

When installing a new leaf PTE onto an invalid ptep, we need to
get_page(ptep). But when we are only updating a valid leaf ptep, we
should not get_page(ptep). An incorrect page_count for the translation
table pages can lead to a memory leak when unmapping a stage 2 memory
range.

Signed-off-by: Yanan Wang
---
 arch/arm64/kvm/hyp/pgtable.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0271b4a3b9fe..696b6aa83faf 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -186,6 +186,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
 		return old == pte;
 
 	smp_store_release(ptep, pte);
+	get_page(virt_to_page(ptep));
 	return true;
 }
 
@@ -476,6 +477,7 @@ static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	/* There's an existing valid leaf entry, so perform break-before-make */
 	kvm_set_invalid_pte(ptep);
 	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+	put_page(virt_to_page(ptep));
 	kvm_set_valid_leaf_pte(ptep, phys, data->attr, level);
 out:
 	data->phys += granule;
@@ -512,7 +514,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	}
 
 	if (stage2_map_walker_try_leaf(addr, end, level, ptep, data))
-		goto out_get_page;
+		return 0;
 
 	if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;
@@ -536,9 +538,8 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	}
 
 	kvm_set_table_pte(ptep, childp);
-
-out_get_page:
 	get_page(page);
+
 	return 0;
 }
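
To make the refcounting rule concrete, here is a stand-alone sketch in
plain C (not kernel code; refs, valid[] and the helper names are all
invented for the illustration) of the invariant the patch restores: a
table page holds one reference per valid entry installed in it, taken
on install and dropped on invalidation, so its page_count reaches zero
exactly when no valid entries remain.

        #include <assert.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Toy model of one page-table page: refs models its page_count. */
        static int refs;
        static bool valid[512];

        static void install_leaf(int idx)
        {
                if (valid[idx])
                        return;         /* updating a valid entry: no get_page() */
                valid[idx] = true;
                refs++;                 /* models get_page(virt_to_page(ptep)) */
        }

        static void break_before_make(int idx)
        {
                assert(valid[idx]);
                valid[idx] = false;     /* models kvm_set_invalid_pte() + TLBI */
                refs--;                 /* models put_page(virt_to_page(ptep)) */
                install_leaf(idx);      /* re-install with the new output address */
        }

        static void unmap_range(void)
        {
                for (int i = 0; i < 512; i++) {
                        if (valid[i]) {
                                valid[i] = false;
                                refs--;
                        }
                }
                assert(refs == 0);      /* fails if an install over-counted */
                printf("table page can be freed\n");
        }

        int main(void)
        {
                install_leaf(0);
                break_before_make(0);
                install_leaf(1);
                unmap_range();
                return 0;
        }

Before the fix, re-installing over an already-valid entry took a
reference with no matching put, so unmap_range() would finish with
refs > 0 and the table page would never be freed.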

From patchwork Mon Nov 30 12:18:46 2020
From: Yanan Wang
To: Marc Zyngier, Catalin Marinas, Will Deacon, James Morse,
    Julien Thierry, Suzuki K Poulose, Gavin Shan, Quentin Perret
Subject: [RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a
 block entry
Date: Mon, 30 Nov 2020 20:18:46 +0800
Message-ID: <20201130121847.91808-3-wangyanan55@huawei.com>
In-Reply-To: <20201130121847.91808-1-wangyanan55@huawei.com>
References: <20201130121847.91808-1-wangyanan55@huawei.com>
Cc: lushenming@huawei.com, jiangkunkun@huawei.com, Yanan Wang,
    yezengruan@huawei.com, wangjingyi11@huawei.com, yuzenghui@huawei.com,
    wanghaibin.wang@huawei.com, zhukeqian1@huawei.com

In the dirty logging case (logging_active == true), we need to collapse
a block entry into a table if necessary. After dirty logging is
canceled, when merging tables back into a block entry, we should not
only free the non-huge page-table pages but also unmap the non-huge
mappings for the block. Without the unmap, inconsistent TLB entries
will be created for the pages in the block, as the toy model below
illustrates.
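
To see why the missing unmap matters, here is a stand-alone toy model
in plain C (a single-entry cache with invented names; no relation to
the real TLB structure or the kernel APIs) of how a cached translation
goes stale when the tables are replaced without invalidation:

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy model, not kernel code: a one-entry "TLB". */
        struct tlb_entry {
                bool valid;
                unsigned long va, pa;
        };

        static struct tlb_entry tlb;

        /* Pretend page-table walk: translate va against a mapping base. */
        static unsigned long walk(unsigned long va, unsigned long base)
        {
                return base + (va & 0xfffUL);
        }

        static unsigned long translate(unsigned long va, unsigned long base)
        {
                if (tlb.valid && tlb.va == va)
                        return tlb.pa;          /* cached, possibly stale */
                tlb.valid = true;
                tlb.va = va;
                tlb.pa = walk(va, base);
                return tlb.pa;
        }

        int main(void)
        {
                unsigned long va = 0x42, table_pa = 0x1000, block_pa = 0x200000;

                translate(va, table_pa);        /* TLB caches the old mapping */

                /* Merge tables into a block WITHOUT invalidating the TLB: */
                printf("stale: %#lx (walk says %#lx)\n",
                       translate(va, block_pa), walk(va, block_pa));

                /* With the fix: invalidate first, then remap (break-before-make). */
                tlb.valid = false;
                printf("fresh: %#lx\n", translate(va, block_pa));
                return 0;
        }

The first translation returns the stale per-page result even though the
tables now describe the block mapping; invalidating before reuse, as
the patch does, makes the walk result visible.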
We could also use the unmap_stage2_range API to unmap the non-huge
mappings, but that could potentially free the upper-level page-table
page, which will still be useful later.

Signed-off-by: Yanan Wang
---
 arch/arm64/kvm/hyp/pgtable.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 696b6aa83faf..fec8dc9f2baa 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 	return 0;
 }
 
+static void stage2_flush_dcache(void *addr, u64 size);
+static bool stage2_pte_cacheable(kvm_pte_t pte);
+
 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 				struct stage2_map_data *data)
 {
@@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct page *page = virt_to_page(ptep);
 
 	if (data->anchor) {
-		if (kvm_pte_valid(pte))
+		if (kvm_pte_valid(pte)) {
+			kvm_set_invalid_pte(ptep);
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
+				     addr, level);
 			put_page(page);
 
+			if (stage2_pte_cacheable(pte))
+				stage2_flush_dcache(kvm_pte_follow(pte),
+						    kvm_granule_size(level));
+		}
+
 		return 0;
 	}
 
@@ -574,7 +585,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
  * The behaviour of the LEAF callback then depends on whether or not the
  * anchor has been set. If not, then we're not using a block mapping higher
  * up the table and we perform the mapping at the existing leaves instead.
- * If, on the other hand, the anchor _is_ set, then we drop references to
+ * If, on the other hand, the anchor _is_ set, then we unmap the mapping of
  * all valid leaves so that the pages beneath the anchor can be freed.
  *
  * Finally, the TABLE_POST callback does nothing if the anchor has not
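
For readers unfamiliar with the walker, here is a rough stand-alone
model in plain C (the enum, the anchor variable and all names are
invented; the real callbacks live in pgtable.c) of the anchor mechanism
the comment hunk above describes: TABLE_PRE sets the anchor at the
level to be turned into a block, every LEAF visit below it unmaps, and
TABLE_POST at the anchor installs the block and frees the old tables.

        #include <stdbool.h>
        #include <stdio.h>

        enum visit { TABLE_PRE, LEAF, TABLE_POST };

        static int anchor_level = -1;

        static void visit(enum visit v, int level, bool can_block)
        {
                switch (v) {
                case TABLE_PRE:
                        if (can_block && anchor_level < 0) {
                                anchor_level = level;
                                printf("anchor set at level %d\n", level);
                        }
                        break;
                case LEAF:
                        if (anchor_level >= 0)
                                printf("  unmap leaf at level %d\n", level);
                        break;
                case TABLE_POST:
                        if (level == anchor_level) {
                                printf("install block at level %d, free tables\n",
                                       level);
                                anchor_level = -1;
                        }
                        break;
                }
        }

        int main(void)
        {
                visit(TABLE_PRE, 2, true);      /* a 2M block is possible here */
                visit(LEAF, 3, false);
                visit(LEAF, 3, false);
                visit(TABLE_POST, 2, false);
                return 0;
        }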

From patchwork Mon Nov 30 12:18:47 2020
From: Yanan Wang
To: Marc Zyngier, Catalin Marinas, Will Deacon, James Morse,
    Julien Thierry, Suzuki K Poulose, Gavin Shan, Quentin Perret
Subject: [RFC PATCH 3/3] KVM: arm64: Add usage of stage 2 fault lookup level
 in user_mem_abort()
Date: Mon, 30 Nov 2020 20:18:47 +0800
Message-ID: <20201130121847.91808-4-wangyanan55@huawei.com>
In-Reply-To: <20201130121847.91808-1-wangyanan55@huawei.com>
References: <20201130121847.91808-1-wangyanan55@huawei.com>
Cc: lushenming@huawei.com, jiangkunkun@huawei.com, Yanan Wang,
    yezengruan@huawei.com, wangjingyi11@huawei.com, yuzenghui@huawei.com,
    wanghaibin.wang@huawei.com, zhukeqian1@huawei.com

Currently, when we get a FSC_PERM fault, we use (logging_active &&
writable) alone to decide whether to call kvm_pgtable_stage2_map().
There are two more cases we should consider.

(1) After logging_active is configured back from true to false: when we
get a FSC_PERM fault with write_fault and an adjustment of the hugepage
is needed, we should merge the tables back into a block entry. This
case is currently missed: we keep calling
kvm_pgtable_stage2_relax_perms() instead, which leads to an endless
loop and a guest panic due to soft lockup.

(2) We use (FSC_PERM && logging_active && writable) to decide to
collapse a block entry into a table by calling kvm_pgtable_stage2_map().
But sometimes we only need to relax permissions, when writing to a page
rather than a block. In that case, calling
kvm_pgtable_stage2_relax_perms() is fine, as the sketch below shows.
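
The difference between the old and the new condition can be sketched in
plain C as follows (the struct and helper names are invented for the
illustration; vma_pagesize and fault_granule mirror the variables used
in the patch below):

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy model, not kernel code: the two missed cases above. */
        struct fault {
                bool logging_active, writable;
                unsigned long vma_pagesize, fault_granule;
        };

        static const char *old_decision(const struct fault *f)
        {
                /* old: FSC_PERM && !(logging_active && writable) -> relax */
                return !(f->logging_active && f->writable) ?
                       "relax_perms" : "stage2_map";
        }

        static const char *new_decision(const struct fault *f)
        {
                /* new: relax perms only when the fault granule matches */
                return f->vma_pagesize == f->fault_granule ?
                       "relax_perms" : "stage2_map";
        }

        int main(void)
        {
                /* Case (1): logging canceled, 4K entry should merge to 2M. */
                struct fault merge = { false, true, 2UL << 20, 4096 };
                /* Case (2): logging active, write to a 4K page. */
                struct fault page  = { true,  true, 4096, 4096 };

                printf("merge-back: old=%s new=%s\n",
                       old_decision(&merge), new_decision(&merge));
                printf("4K page:    old=%s new=%s\n",
                       old_decision(&page), new_decision(&page));
                return 0;
        }

With the old condition the merge-back case keeps choosing relax_perms
(the endless loop), while the new condition sends it to stage2_map and
lets the plain page write take the cheap relax_perms path.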
The ISS field bits[1:0] in the ESR_EL2 register indicate the stage 2
lookup level at which a data abort or instruction abort occurred. By
comparing the granule of the fault lookup level with vma_pagesize, we
can strictly distinguish the conditions for calling
kvm_pgtable_stage2_relax_perms() from those for calling
kvm_pgtable_stage2_map(), and the two cases above are then handled
correctly.

Suggested-by: Keqian Zhu
Signed-off-by: Yanan Wang
---
 arch/arm64/include/asm/esr.h         |  1 +
 arch/arm64/include/asm/kvm_emulate.h |  5 +++++
 arch/arm64/kvm/mmu.c                 | 11 +++++++++--
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 22c81f1edda2..85a3e49f92f4 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -104,6 +104,7 @@
 /* Shared ISS fault status code(IFSC/DFSC) for Data/Instruction aborts */
 #define ESR_ELx_FSC		(0x3F)
 #define ESR_ELx_FSC_TYPE	(0x3C)
+#define ESR_ELx_FSC_LEVEL	(0x03)
 #define ESR_ELx_FSC_EXTABT	(0x10)
 #define ESR_ELx_FSC_SERROR	(0x11)
 #define ESR_ELx_FSC_ACCESS	(0x08)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 5ef2669ccd6c..2e0e8edf6306 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -350,6 +350,11 @@ static __always_inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_TYPE;
 }
 
+static __always_inline u8 kvm_vcpu_trap_get_fault_level(const struct kvm_vcpu *vcpu)
+{
+	return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC_LEVEL;
+}
+
 static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu)
 {
 	switch (kvm_vcpu_trap_get_fault(vcpu)) {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a01da9fdc99..75814a02d189 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -754,10 +754,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long vma_pagesize;
+	unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
+	unsigned long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 
+	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
 	VM_BUG_ON(write_fault && exec_fault);
@@ -896,7 +898,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		prot |= KVM_PGTABLE_PROT_X;
 
-	if (fault_status == FSC_PERM && !(logging_active && writable)) {
+	/*
+	 * Under the premise of getting a FSC_PERM fault, we just need to relax
+	 * permissions only if vma_pagesize equals fault_granule. Otherwise,
+	 * kvm_pgtable_stage2_map() should be called to change block size.
+	 */
+	if (fault_status == FSC_PERM && vma_pagesize == fault_granule) {
 		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
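
For reference, a stand-alone sketch, assuming 4K pages and stage 2
lookup levels 0-3, of how the fault lookup level determines
fault_granule; granule() below mirrors what
1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(level) computes in the patch:

        #include <stdio.h>

        /* Illustration only: level -> bytes covered, for PAGE_SHIFT = 12. */
        static unsigned long granule(unsigned int level)
        {
                return 1UL << ((12 - 3) * (4 - level) + 3);
        }

        int main(void)
        {
                unsigned long vma_pagesize = 2UL << 20; /* memslot allows 2M */

                for (unsigned int level = 1; level <= 3; level++) {
                        unsigned long fault_granule = granule(level);

                        printf("level %u: granule %#11lx -> %s\n",
                               level, fault_granule,
                               vma_pagesize == fault_granule ?
                               "relax_perms" : "stage2_map");
                }
                return 0;
        }

With a 2M vma_pagesize, only a level 2 permission fault (2M granule)
takes the relax_perms path; a level 3 fault (4K granule) indicates the
block must be rebuilt via kvm_pgtable_stage2_map().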