From patchwork Fri Mar 26 03:16:53 2021
From: Yanan Wang <wangyanan55@huawei.com>
To: Marc Zyngier, Will Deacon, Alexandru Elisei, Catalin Marinas
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Gavin Shan,
    Quentin Perret, Yanan Wang
Subject: [RFC PATCH v3 1/2] KVM: arm64: Move CMOs from user_mem_abort to the
 fault handlers
Date: Fri, 26 Mar 2021 11:16:53 +0800
Message-ID: <20210326031654.3716-2-wangyanan55@huawei.com>
In-Reply-To: <20210326031654.3716-1-wangyanan55@huawei.com>
References: <20210326031654.3716-1-wangyanan55@huawei.com>

We currently uniformly perform CMOs on the D-cache and I-cache in
user_mem_abort() before calling the fault handlers. If we get concurrent
guest faults (e.g. translation faults, permission faults) or some
unnecessary guest faults caused by break-before-make (BBM), the CMOs for
the first vCPU are necessary while the later ones are not. By moving the
CMOs into the fault handlers, we can easily identify the conditions under
which they are really needed and avoid the unnecessary ones. Since
performing CMOs is time consuming, especially when flushing a block
range, this reduces the load on KVM and improves the efficiency of the
page table code.

So let's move both the clean of the D-cache and the invalidation of the
I-cache to the map path, and move only the invalidation of the I-cache to
the permission path. Since the original APIs for CMOs in mmu.c are only
called from user_mem_abort(), move them to pgtable.c as well.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
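Note for reviewers: the decision points this patch introduces boil down
to two small predicates on the old and new PTEs. The toy, userspace-only
C model below mirrors them; the bit encodings, helper names and the
main() driver are invented for illustration and are not the kernel's.

/*
 * Toy model of the CMO decisions in stage2_map_walker_try_leaf():
 * clean the D-cache when installing over an invalid or cacheable
 * old PTE, invalidate the I-cache when the new PTE is executable.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_VALID      (1u << 0)   /* invented encoding, not the real layout */
#define PTE_CACHEABLE  (1u << 1)
#define PTE_EXEC       (1u << 2)

static bool needs_dcache_clean(uint32_t old)
{
        /* mirrors: !kvm_pte_valid(old) || stage2_pte_cacheable(old) */
        return !(old & PTE_VALID) || (old & PTE_CACHEABLE);
}

static bool needs_icache_inval(uint32_t new)
{
        /* mirrors: stage2_pte_executable(new) */
        return new & PTE_EXEC;
}

int main(void)
{
        uint32_t old = 0;                       /* first translation fault */
        uint32_t new = PTE_VALID | PTE_CACHEABLE | PTE_EXEC;

        printf("dcache clean: %d, icache inval: %d\n",
               needs_dcache_clean(old), needs_icache_inval(new));
        return 0;
}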
 arch/arm64/include/asm/kvm_mmu.h | 31 ---------------
 arch/arm64/kvm/hyp/pgtable.c     | 68 +++++++++++++++++++++++++-------
 arch/arm64/kvm/mmu.c             | 23 ++---------
 3 files changed, 57 insertions(+), 65 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 90873851f677..c31f88306d4e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -177,37 +177,6 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
 }
 
-static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
-{
-	void *va = page_address(pfn_to_page(pfn));
-
-	/*
-	 * With FWB, we ensure that the guest always accesses memory using
-	 * cacheable attributes, and we don't have to clean to PoC when
-	 * faulting in pages. Furthermore, FWB implies IDC, so cleaning to
-	 * PoU is not required either in this case.
-	 */
-	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-		return;
-
-	kvm_flush_dcache_to_poc(va, size);
-}
-
-static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
-						  unsigned long size)
-{
-	if (icache_is_aliasing()) {
-		/* any kind of VIPT cache */
-		__flush_icache_all();
-	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
-		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
-		void *va = page_address(pfn_to_page(pfn));
-
-		invalidate_icache_range((unsigned long)va,
-					(unsigned long)va + size);
-	}
-}
-
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d536..829a34eea526 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -464,6 +464,43 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
 	return 0;
 }
 
+static bool stage2_pte_cacheable(kvm_pte_t pte)
+{
+	u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
+	return memattr == PAGE_S2_MEMATTR(NORMAL);
+}
+
+static bool stage2_pte_executable(kvm_pte_t pte)
+{
+	return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
+}
+
+static void stage2_flush_dcache(void *addr, u64 size)
+{
+	/*
+	 * With FWB, we ensure that the guest always accesses memory using
+	 * cacheable attributes, and we don't have to clean to PoC when
+	 * faulting in pages. Furthermore, FWB implies IDC, so cleaning to
+	 * PoU is not required either in this case.
+	 */
+	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+		return;
+
+	__flush_dcache_area(addr, size);
+}
+
+static void stage2_invalidate_icache(void *addr, u64 size)
+{
+	if (icache_is_aliasing()) {
+		/* Flush any kind of VIPT icache */
+		__flush_icache_all();
+	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
+		/* PIPT or VPIPT at EL2 */
+		invalidate_icache_range((unsigned long)addr,
+					(unsigned long)addr + size);
+	}
+}
+
 static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 				      kvm_pte_t *ptep,
 				      struct stage2_map_data *data)
@@ -495,6 +532,13 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 		put_page(page);
 	}
 
+	/* Perform CMOs before installation of the new PTE */
+	if (!kvm_pte_valid(old) || stage2_pte_cacheable(old))
+		stage2_flush_dcache(__va(phys), granule);
+
+	if (stage2_pte_executable(new))
+		stage2_invalidate_icache(__va(phys), granule);
+
 	smp_store_release(ptep, new);
 	get_page(page);
 	data->phys += granule;
@@ -651,20 +695,6 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return ret;
 }
 
-static void stage2_flush_dcache(void *addr, u64 size)
-{
-	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-		return;
-
-	__flush_dcache_area(addr, size);
-}
-
-static bool stage2_pte_cacheable(kvm_pte_t pte)
-{
-	u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
-	return memattr == PAGE_S2_MEMATTR(NORMAL);
-}
-
 static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 			       enum kvm_pgtable_walk_flags flag,
 			       void * const arg)
@@ -743,8 +773,16 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	 * but worst-case the access flag update gets lost and will be
 	 * set on the next access instead.
 	 */
-	if (data->pte != pte)
+	if (data->pte != pte) {
+		/*
+		 * Invalidate the instruction cache before updating
+		 * if we are going to add the executable permission.
+		 */
+		if (!stage2_pte_executable(*ptep) && stage2_pte_executable(pte))
+			stage2_invalidate_icache(kvm_pte_follow(pte),
+						 kvm_granule_size(level));
 		WRITE_ONCE(*ptep, pte);
+	}
 
 	return 0;
 }

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 77cb2d28f2a4..1eec9f63bc6f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -609,16 +609,6 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
-static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
-{
-	__clean_dcache_guest_page(pfn, size);
-}
-
-static void invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size)
-{
-	__invalidate_icache_guest_page(pfn, size);
-}
-
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
 {
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
@@ -882,13 +872,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (writable)
 		prot |= KVM_PGTABLE_PROT_W;
 
-	if (fault_status != FSC_PERM && !device)
-		clean_dcache_guest_page(pfn, vma_pagesize);
-
-	if (exec_fault) {
+	if (exec_fault)
 		prot |= KVM_PGTABLE_PROT_X;
-		invalidate_icache_guest_page(pfn, vma_pagesize);
-	}
 
 	if (device)
 		prot |= KVM_PGTABLE_PROT_DEVICE;
@@ -1144,10 +1129,10 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 	trace_kvm_set_spte_hva(hva);
 
 	/*
-	 * We've moved a page around, probably through CoW, so let's treat it
-	 * just like a translation fault and clean the cache to the PoC.
+	 * We've moved a page around, probably through CoW, so let's treat
+	 * it just like a translation fault and the map handler will clean
+	 * the cache to the PoC.
 	 */
-	clean_dcache_guest_page(pfn, PAGE_SIZE);
 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pfn);
 	return 0;
 }
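One ordering detail worth spelling out: in the map path, the CMOs are
performed before smp_store_release() publishes the new PTE, so a walker
that observes the valid PTE also observes the maintained caches. Below is
a rough userspace analogue of that publish pattern, using C11 atomics and
pthreads as stand-ins for the kernel primitives; it is illustrative only
and not the patch's code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;                /* stands in for "caches maintained" */
static _Atomic(int *) pte;         /* stands in for the PTE slot */

static void *walker(void *arg)
{
        int *p;

        /* acquire pairs with the producer's release-store below */
        while (!(p = atomic_load_explicit(&pte, memory_order_acquire)))
                ;
        printf("walker sees payload = %d\n", *p);   /* always 42 */
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, walker, NULL);
        payload = 42;              /* "perform the CMOs" */
        atomic_store_explicit(&pte, &payload, memory_order_release);
        pthread_join(t, NULL);
        return 0;
}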
From patchwork Fri Mar 26 03:16:54 2021
From: Yanan Wang <wangyanan55@huawei.com>
To: Marc Zyngier, Will Deacon, Alexandru Elisei, Catalin Marinas
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Gavin Shan,
    Quentin Perret, Yanan Wang
Subject: [RFC PATCH v3 2/2] KVM: arm64: Distinguish cases of memcache
 allocations completely
Date: Fri, 26 Mar 2021 11:16:54 +0800
Message-ID: <20210326031654.3716-3-wangyanan55@huawei.com>
In-Reply-To: <20210326031654.3716-1-wangyanan55@huawei.com>
References: <20210326031654.3716-1-wangyanan55@huawei.com>

With a guest translation fault, the memcache pages are not needed if KVM
is only about to install a new leaf entry into the existing page table.
And with a guest permission fault, the memcache pages are also not needed
for a write fault during dirty logging if KVM is only about to update the
existing leaf entry instead of collapsing a block entry into a table.

By comparing fault_granule and vma_pagesize, the cases that require
allocations from the memcache and the cases that don't can be
distinguished completely.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
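Note for reviewers: the whole policy reduces to a single comparison. A
minimal standalone C sketch of it follows; the needs_memcache() helper
and the sizes in main() are invented for illustration, whereas in the
patch the check lives inline in user_mem_abort().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * New page-table pages are needed only when the mapping being installed
 * is smaller than the granule of the faulting lookup level, i.e. when a
 * block entry must be split into a table.
 */
static bool needs_memcache(uint64_t fault_granule, uint64_t vma_pagesize)
{
        return fault_granule > vma_pagesize;
}

int main(void)
{
        const uint64_t SZ_4K = 4096, SZ_2M = 2 * 1024 * 1024;

        /* dirty-logging write fault: split a 2M block into 4K pages */
        printf("split block: %d\n", needs_memcache(SZ_2M, SZ_4K)); /* 1 */
        /* permission fault on an existing 4K leaf: just relax the entry */
        printf("relax leaf:  %d\n", needs_memcache(SZ_4K, SZ_4K)); /* 0 */
        return 0;
}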
 arch/arm64/kvm/mmu.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1eec9f63bc6f..05af40dc60c1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -810,19 +810,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn = fault_ipa >> PAGE_SHIFT;
 	mmap_read_unlock(current->mm);
 
-	/*
-	 * Permission faults just need to update the existing leaf entry,
-	 * and so normally don't require allocations from the memcache. The
-	 * only exception to this is when dirty logging is enabled at runtime
-	 * and a write fault needs to collapse a block entry into a table.
-	 */
-	if (fault_status != FSC_PERM || (logging_active && write_fault)) {
-		ret = kvm_mmu_topup_memory_cache(memcache,
-						 kvm_mmu_cache_min_pages(kvm));
-		if (ret)
-			return ret;
-	}
-
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	/*
 	 * Ensure the read of mmu_notifier_seq happens before we call
@@ -880,6 +867,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		prot |= KVM_PGTABLE_PROT_X;
 
+	/*
+	 * Allocations from the memcache are required only when granule of the
+	 * lookup level where the guest fault happened exceeds vma_pagesize,
+	 * which means new page tables will be created in the fault handlers.
+	 */
+	if (fault_granule > vma_pagesize) {
+		ret = kvm_mmu_topup_memory_cache(memcache,
+						 kvm_mmu_cache_min_pages(kvm));
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Under the premise of getting a FSC_PERM fault, we just need to relax
 	 * permissions only if vma_pagesize equals fault_granule. Otherwise,