From patchwork Mon Jul 2 15:02:50 2018
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 10501795
From: Marc Zyngier
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 6/6] KVM: arm/arm64: Remove unnecessary CMOs when creating HYP page tables
Date: Mon, 2 Jul 2018 16:02:50 +0100
Message-Id: <20180702150250.16550-7-marc.zyngier@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180702150250.16550-1-marc.zyngier@arm.com>
References: <20180702150250.16550-1-marc.zyngier@arm.com>
Cc: Mark Rutland, Catalin Marinas, Christoffer Dall, Suzuki K Poulose

There is no need to perform cache maintenance operations when creating
the HYP page tables if we have the multiprocessing extensions. ARMv7
mandates them with the virtualization support, and ARMv8 just mandates
them unconditionally.

Let's remove these operations.

Acked-by: Mark Rutland
Acked-by: Christoffer Dall
Signed-off-by: Marc Zyngier
---
 virt/kvm/arm/mmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index eade30caaa3c..97d27cd9c654 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -609,7 +609,6 @@ static void create_hyp_pte_mappings(pmd_t *pmd, unsigned long start,
 		pte = pte_offset_kernel(pmd, addr);
 		kvm_set_pte(pte, pfn_pte(pfn, prot));
 		get_page(virt_to_page(pte));
-		kvm_flush_dcache_to_poc(pte, sizeof(*pte));
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
 }
@@ -636,7 +635,6 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start,
 			}
 			kvm_pmd_populate(pmd, pte);
 			get_page(virt_to_page(pmd));
-			kvm_flush_dcache_to_poc(pmd, sizeof(*pmd));
 		}
 
 		next = pmd_addr_end(addr, end);
@@ -669,7 +667,6 @@ static int create_hyp_pud_mappings(pgd_t *pgd, unsigned long start,
 			}
 			kvm_pud_populate(pud, pmd);
 			get_page(virt_to_page(pud));
-			kvm_flush_dcache_to_poc(pud, sizeof(*pud));
 		}
 
 		next = pud_addr_end(addr, end);
@@ -706,7 +703,6 @@ static int __create_hyp_mappings(pgd_t *pgdp, unsigned long ptrs_per_pgd,
 			}
 			kvm_pgd_populate(pgd, pud);
 			get_page(virt_to_page(pgd));
-			kvm_flush_dcache_to_poc(pgd, sizeof(*pgd));
 		}
 
 		next = pgd_addr_end(addr, end);
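
As background for why the flushes were there at all (this sketch is not part
of the patch, and the helpers table_walks_are_coherent() and
clean_dcache_to_poc() are hypothetical names, not kernel APIs): a newly
written page-table entry only needs to be cleaned to the Point of Coherency
when the hardware table walker does not snoop the data cache. Because the
multiprocessing extensions guarantee coherent walks wherever KVM can run,
the conditional clean below collapses to nothing, which is exactly what the
patch implements by deleting the flush calls outright.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers, declared only so the sketch is self-contained. */
extern bool table_walks_are_coherent(void);
extern void clean_dcache_to_poc(const void *addr, size_t size);

/*
 * Publish a page-table entry. If the walker cannot snoop the data cache,
 * the entry must be cleaned to the Point of Coherency before the walker
 * can observe it. With the MP extensions (mandatory alongside ARMv7
 * virtualization support, unconditional on ARMv8) the walk is coherent,
 * so the clean is never taken.
 */
static void set_table_entry(uint64_t *entryp, uint64_t entry)
{
	*entryp = entry;			/* write the new entry */

	if (!table_walks_are_coherent())	/* never true on ARMv7-VE / ARMv8 */
		clean_dcache_to_poc(entryp, sizeof(*entryp));
}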