From patchwork Wed Mar 10 09:43:16 2021
X-Patchwork-Submitter: Yanan Wang
X-Patchwork-Id: 12127445
From: Yanan Wang
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose, Gavin Shan, Quentin Perret
CC: Yanan Wang
Subject: [RFC PATCH v2 0/3] KVM: arm64: Improve efficiency of stage2 page table
Date: Wed, 10 Mar 2021 17:43:16 +0800
Message-ID: <20210310094319.18760-1-wangyanan55@huawei.com>
Hi,

This v2 series makes some efficiency improvements to the stage2 page table code, together with test results that quantify the benefit of each patch.

Changelogs:
v1->v2:
- rebased on top of mainline v5.12-rc2
- also move CMOs of I-cache to the fault handlers
- merge patch 2 and patch 3 together
- retest this v2 series based on v5.12-rc2
- v1: https://lore.kernel.org/lkml/20210208112250.163568-1-wangyanan55@huawei.com/

About patch 1:
We currently uniformly perform CMOs of D-cache and I-cache in user_mem_abort() before calling the fault handlers. If we get concurrent translation faults on the same IPA (page or block), the CMOs for the first fault are necessary while the later ones are not. By moving the CMOs into the fault handlers, we can easily identify the conditions under which they are really needed and avoid the unnecessary ones. Since performing CMOs is time consuming, especially when flushing a block range, this reduces much of KVM's load and improves the efficiency of the stage2 page table code.

So let's move both the clean of D-cache and the invalidation of I-cache to the map path, and move only the invalidation of I-cache to the permission path. Since the original APIs for CMOs in mmu.c are only called in user_mem_abort(), we now also move them to pgtable.c. (A minimal sketch of this idea follows the test results below.)

The following results represent the benefit of patch 1 alone, and they were produced with the kvm/selftest at [1] that I have posted recently.
[1] https://lore.kernel.org/lkml/20210302125751.19080-1-wangyanan55@huawei.com/

When there are multiple vcpus concurrently accessing the same memory region, we can measure the execution time of KVM creating new mappings, updating the permissions of old mappings from RO to RW, and rebuilding the blocks after they have been split.

hardware platform: HiSilicon Kunpeng920 Server
host kernel: Linux mainline v5.12-rc2

cmdline: ./kvm_page_table_test -m 4 -s anonymous -b 1G -v 80
(80 vcpus, 1G memory, page mappings(normal 4K))
KVM_CREATE_MAPPINGS: before 104.63s -> after 97.30s  +7.00%
KVM_UPDATE_MAPPINGS: before 78.47s  -> after 77.18s  +1.64%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_thp -b 20G -v 40
(40 vcpus, 20G memory, block mappings(THP 2M))
KVM_CREATE_MAPPINGS: before 15.70s  -> after 7.36s   +53.12%
KVM_UPDATE_MAPPINGS: before 161.00s -> after 135.03s +16.13%
KVM_REBUILD_BLOCKS:  before 170.49s -> after 145.46s +14.68%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 20G -v 40
(40 vcpus, 20G memory, block mappings(HUGETLB 1G))
KVM_CREATE_MAPPINGS: before 104.55s -> after 3.69s   +96.47%
KVM_UPDATE_MAPPINGS: before 160.67s -> after 130.65s +18.68%
KVM_REBUILD_BLOCKS:  before 103.95s -> after 2.96s   +97.15%
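Purely for illustration, here is a minimal, self-contained C sketch of the map and permission paths. This is NOT the actual pgtable.c code: the PTE bits and the clean_dcache()/invalidate_icache() helpers are stand-ins of my own, and only the placement of the CMOs is the point.

/*
 * Minimal sketch of the patch 1 idea -- not the actual pgtable.c code.
 * PTE bits and the CMO helpers below are illustrative stand-ins.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t kvm_pte_t;

#define PTE_VALID (1ULL << 0)
#define PTE_EXEC  (1ULL << 1)

static void clean_dcache(void *addr, uint64_t size)
{
	printf("D-cache clean:      %p, %llu bytes\n", addr, (unsigned long long)size);
}

static void invalidate_icache(void *addr, uint64_t size)
{
	printf("I-cache invalidate: %p, %llu bytes\n", addr, (unsigned long long)size);
}

/*
 * Map path: the CMOs happen here, only when a brand-new valid mapping is
 * about to be installed. A concurrent fault on the same IPA that finds the
 * entry already valid bails out early and skips the CMOs entirely -- before
 * the patch, user_mem_abort() performed them unconditionally, before it
 * could know whether the installation would actually take place.
 */
static int map_leaf(kvm_pte_t *ptep, kvm_pte_t attr, void *page, uint64_t size)
{
	if (*ptep & PTE_VALID)
		return -EAGAIN;		/* another vcpu won the race: skip CMOs */

	clean_dcache(page, size);	/* guest may access with caches off */
	if (attr & PTE_EXEC)
		invalidate_icache(page, size);

	*ptep = attr | PTE_VALID;
	return 0;
}

/* Permission path: only the I-cache needs maintenance, when exec is granted. */
static int relax_perms_leaf(kvm_pte_t *ptep, kvm_pte_t set, void *page, uint64_t size)
{
	if ((set & PTE_EXEC) && !(*ptep & PTE_EXEC))
		invalidate_icache(page, size);

	*ptep |= set;
	return 0;
}

int main(void)
{
	static char page[4096];
	kvm_pte_t pte = 0;

	map_leaf(&pte, PTE_EXEC, page, sizeof(page));		/* CMOs performed */
	map_leaf(&pte, PTE_EXEC, page, sizeof(page));		/* already valid: skipped */
	relax_perms_leaf(&pte, PTE_EXEC, page, sizeof(page));	/* already exec: skipped */
	return 0;
}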
About patch 2:
If KVM needs to coalesce existing normal page mappings into a block mapping, we currently take the following steps in order:
1) invalidate the table entry in the PMD/PUD table
2) flush TLB by VMID
3) unmap the old sub-level tables
4) install the new block entry in the PMD/PUD table

Unmapping the numerous old page mappings in step 3 takes a long time, so there is a long window (steps 1 to 3) during which the PMD/PUD table entry can be found invalid. The other vcpus therefore have a high chance of triggering unnecessary translation faults if they access any page within the block during that window.

So let's install the block entry first to keep the other vcpus' memory accesses uninterrupted, and unmap the old page mappings after the installation. This removes most of the window in which the table entry is invalid and avoids most of the unnecessary translation faults. After this patch the steps become:
1) invalidate the table entry in the PMD/PUD table
2) flush TLB by VMID
3) install the new block entry in the PMD/PUD table
4) unmap the old sub-level tables
(A sketch of this reordering follows the diffstat at the end of this mail.)

Since this patch only affects the rebuilding of block mappings, we measure the execution time of KVM rebuilding the blocks after they have been split.

hardware platform: HiSilicon Kunpeng920 Server
host kernel: Linux mainline v5.12-rc2

cmdline: ./kvm_page_table_test -m 4 -s anonymous_thp -b 20G -v 20
(20 vcpus, 20G memory, block mappings(THP 2M))
KVM_REBUILD_BLOCKS: before 73.64s -> after 57.75s  +21.58%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_thp -b 20G -v 40
(40 vcpus, 20G memory, block mappings(THP 2M))
KVM_REBUILD_BLOCKS: before 145.4s -> after 130.8s  +10.62%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 1G -v 80
(80 vcpus, 1G memory, block mappings(HUGETLB 1G))
KVM_REBUILD_BLOCKS: before 0.166s -> after 0.035s  +78.92%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 20G -v 20
(20 vcpus, 20G memory, block mappings(HUGETLB 1G))
KVM_REBUILD_BLOCKS: before 2.875s -> after 0.282s  +90.20%

cmdline: ./kvm_page_table_test -m 4 -s anonymous_hugetlb_1gb -b 20G -v 40
(40 vcpus, 20G memory, block mappings(HUGETLB 1G))
KVM_REBUILD_BLOCKS: before 2.965s -> after 0.359s  +87.55%

About patch 3:
A new method to distinguish the cases of memcache allocations is introduced. By comparing fault_granule and vma_pagesize, the cases that require allocations from memcache and the cases that don't can be distinguished completely. (A sketch of this heuristic also follows the diffstat below.)

---

Yanan Wang (3):
  KVM: arm64: Move CMOs from user_mem_abort to the fault handlers
  KVM: arm64: Install the block entry before unmapping the page mappings
  KVM: arm64: Distinguish cases of memcache allocations completely

 arch/arm64/include/asm/kvm_mmu.h |  31 ---------
 arch/arm64/kvm/hyp/pgtable.c     | 112 +++++++++++++++++++++++--------
 arch/arm64/kvm/mmu.c             |  48 +++++--------
 3 files changed, 99 insertions(+), 92 deletions(-)
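As promised above, a small self-contained sketch of the reordering in patch 2. The types and helpers are illustrative stand-ins, not the kernel's stage2 code; only the ordering of the four steps matters here.

/*
 * Sketch of the reordering in patch 2 (illustrative, not the kernel code).
 * The patched version keeps the PMD/PUD entry invalid only across steps
 * 1-2, not across the slow teardown of the sub-level table.
 */
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t kvm_pte_t;
#define PTE_VALID (1ULL << 0)

static void flush_tlb_vmid(void) { /* stand-in for TLBI by VMID */ }

static void free_subtable(kvm_pte_t *table)
{
	/* Unmapping the numerous old page mappings is the slow part. */
	free(table);
}

/* Old order: the entry stays invalid for the whole teardown (steps 1-3). */
static void stage2_coalesce_old(kvm_pte_t *ptep, kvm_pte_t block, kvm_pte_t *sub)
{
	*ptep = 0;		/* 1) invalidate the table entry  */
	flush_tlb_vmid();	/* 2) flush TLB by VMID           */
	free_subtable(sub);	/* 3) unmap sub-level tables: slow */
	*ptep = block;		/* 4) install the new block entry */
}

/* New order: other vcpus see a valid entry again as soon as step 3 runs. */
static void stage2_coalesce_new(kvm_pte_t *ptep, kvm_pte_t block, kvm_pte_t *sub)
{
	*ptep = 0;		/* 1) invalidate the table entry  */
	flush_tlb_vmid();	/* 2) flush TLB by VMID           */
	*ptep = block;		/* 3) install the new block entry */
	free_subtable(sub);	/* 4) slow teardown now happens behind a valid entry */
}

int main(void)
{
	kvm_pte_t *sub1 = calloc(512, sizeof(*sub1));	/* old page-mapping tables */
	kvm_pte_t *sub2 = calloc(512, sizeof(*sub2));
	kvm_pte_t pmd;

	pmd = (kvm_pte_t)(uintptr_t)sub1 | PTE_VALID;
	stage2_coalesce_old(&pmd, 0x40000000ULL | PTE_VALID, sub1);

	pmd = (kvm_pte_t)(uintptr_t)sub2 | PTE_VALID;
	stage2_coalesce_new(&pmd, 0x40000000ULL | PTE_VALID, sub2);
	return 0;
}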
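And a sketch of the patch 3 heuristic. The direction of the comparison below is an assumption based on the description above, not a quote from the patch: new stage2 table pages can only be needed when the mapping to be installed (vma_pagesize) is smaller than the granule at which the fault was taken (fault_granule), because only then must the walker descend to a deeper level and possibly install intermediate tables on the way down.

/*
 * Sketch of the patch 3 heuristic; the comparison direction is an
 * assumption, not the verbatim kernel logic.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_4K 0x1000ULL
#define SZ_2M 0x200000ULL
#define SZ_1G 0x40000000ULL

static bool need_memcache_topup(uint64_t fault_granule, uint64_t vma_pagesize)
{
	/* Descending below the faulting level may create new tables. */
	return fault_granule > vma_pagesize;
}

int main(void)
{
	/* Fault taken on an entry covering 1G, mapping 4K pages: the walker
	 * must descend two levels, so new table pages may be needed. */
	assert(need_memcache_topup(SZ_1G, SZ_4K));

	/* Permission update on an existing 2M block at the same level:
	 * no new tables, so no memcache allocation. */
	assert(!need_memcache_topup(SZ_2M, SZ_2M));
	return 0;
}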