From patchwork Mon Aug 13 10:43:50 2018
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 10564065
From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v3 1/2] KVM: arm/arm64: Skip updating PMD entry if no change
Date: Mon, 13 Aug 2018 11:43:50 +0100
Message-Id: <20180813104351.24595-2-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180813104351.24595-1-punit.agrawal@arm.com>
References: <20180813104351.24595-1-punit.agrawal@arm.com>
Cc: suzuki.poulose@arm.com, marc.zyngier@arm.com, Punit Agrawal,
    christoffer.dall@arm.com, stable@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Contention on updating a PMD entry by a large number of vcpus can lead
to duplicate work when handling stage 2 page faults. As the page table
update follows the break-before-make requirement of the architecture,
it can lead to repeated refaults due to clearing the entry and flushing
the TLBs.

This problem is more likely when -

* there are a large number of vcpus
* the mapping is a large block mapping, such as when using PMD hugepages
  (512MB) with 64k pages

Fix this by skipping the page table update if there is no change in the
entry being updated.

Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages")
Signed-off-by: Punit Agrawal
Reviewed-by: Suzuki Poulose
Cc: Marc Zyngier
Cc: Christoffer Dall
Cc: stable@vger.kernel.org
---
 virt/kvm/arm/mmu.c | 38 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 1d90d79706bd..2bb0b5dba412 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1015,19 +1015,35 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
         pmd = stage2_get_pmd(kvm, cache, addr);
         VM_BUG_ON(!pmd);
 
-        /*
-         * Mapping in huge pages should only happen through a fault. If a
-         * page is merged into a transparent huge page, the individual
-         * subpages of that huge page should be unmapped through MMU
-         * notifiers before we get here.
-         *
-         * Merging of CompoundPages is not supported; they should become
-         * splitting first, unmapped, merged, and mapped back in on-demand.
-         */
-        VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
-
         old_pmd = *pmd;
         if (pmd_present(old_pmd)) {
+                /*
+                 * Multiple vcpus faulting on the same PMD entry, can
+                 * lead to them sequentially updating the PMD with the
+                 * same value. Following the break-before-make
+                 * (pmd_clear() followed by tlb_flush()) process can
+                 * hinder forward progress due to refaults generated
+                 * on missing translations.
+                 *
+                 * Skip updating the page table if the entry is
+                 * unchanged.
+                 */
+                if (pmd_val(old_pmd) == pmd_val(*new_pmd))
+                        return 0;
+
+                /*
+                 * Mapping in huge pages should only happen through a
+                 * fault. If a page is merged into a transparent huge
+                 * page, the individual subpages of that huge page
+                 * should be unmapped through MMU notifiers before we
+                 * get here.
+                 *
+                 * Merging of CompoundPages is not supported; they
+                 * should become splitting first, unmapped, merged,
+                 * and mapped back in on-demand.
+                 */
+                VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
+
                 pmd_clear(pmd);
                 kvm_tlb_flush_vmid_ipa(kvm, addr);
         } else {
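
For readers following the reasoning outside the kernel tree, the stand-alone
C sketch below illustrates the skip-if-unchanged idea under break-before-make.
It is a minimal illustration only: pmd_t, PMD_VALID, tlb_flush() and
set_pmd_huge() here are simplified stand-ins, not the kernel's definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's pmd_t: just the raw descriptor. */
typedef struct { uint64_t val; } pmd_t;

#define PMD_VALID (1ULL << 0)

static bool pmd_present(pmd_t pmd)
{
        return pmd.val & PMD_VALID;
}

/* Stand-in for the stage 2 TLB invalidation; each flush can trigger refaults. */
static void tlb_flush(void)
{
        puts("TLB flush (other vcpus may refault here)");
}

/* Install a huge mapping following break-before-make. */
static void set_pmd_huge(pmd_t *pmd, pmd_t new_pmd)
{
        pmd_t old_pmd = *pmd;

        if (pmd_present(old_pmd)) {
                /*
                 * Another vcpu already installed the same translation;
                 * clearing and flushing again would only generate more
                 * refaults, so leave the entry untouched.
                 */
                if (old_pmd.val == new_pmd.val)
                        return;

                /* Break: invalidate the old entry, then flush the TLBs. */
                pmd->val = 0;
                tlb_flush();
        }

        /* Make: write the new entry. */
        *pmd = new_pmd;
}

int main(void)
{
        pmd_t entry = { 0 };
        pmd_t map = { .val = PMD_VALID | 0x200000 };

        set_pmd_huge(&entry, map);      /* first fault installs the mapping */
        set_pmd_huge(&entry, map);      /* an identical second fault is a no-op */
        return 0;
}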