From patchwork Mon Oct 1 15:54:36 2018
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 10622385
From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v8 2/9] KVM: arm/arm64: Share common code in user_mem_abort()
Date: Mon, 1 Oct 2018 16:54:36 +0100
Message-Id: <20181001155443.23032-3-punit.agrawal@arm.com>
In-Reply-To: <20181001155443.23032-1-punit.agrawal@arm.com>
References: <20181001155443.23032-1-punit.agrawal@arm.com>
Cc: suzuki.poulose@arm.com, marc.zyngier@arm.com, Punit Agrawal,
 will.deacon@arm.com, linux-kernel@vger.kernel.org, Christoffer Dall,
 linux-arm-kernel@lists.infradead.org

The code for operations such as marking the pfn dirty and performing
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different page sizes.

Signed-off-by: Punit Agrawal
Cc: Suzuki K Poulose
Cc: Christoffer Dall
Cc: Marc Zyngier
Reviewed-by: Suzuki K Poulose
---
 virt/kvm/arm/mmu.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index c23a1b323aad..5b76ee204000 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1490,7 +1490,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 	bool logging_active = memslot_is_logging(memslot);
-	unsigned long flags = 0;
+	unsigned long vma_pagesize, flags = 0;
 
 	write_fault = kvm_is_write_fault(vcpu);
 	exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
@@ -1510,10 +1510,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
+	vma_pagesize = vma_kernel_pagesize(vma);
+	if (vma_pagesize == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
+		/*
+		 * Fall back to PTE if it's not one of the Stage 2
+		 * supported hugepage sizes.
+		 */
+		vma_pagesize = PAGE_SIZE;
+
 		/*
 		 * Pages belonging to memslots that don't have the same
 		 * alignment for userspace and IPA cannot be mapped using
@@ -1579,23 +1586,34 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
 
-	if (!hugetlb && !force_pte)
+	if (!hugetlb && !force_pte) {
+		/*
+		 * Only PMD_SIZE transparent hugepages (THP) are
+		 * currently supported. This code will need to be
+		 * updated to support other THP sizes.
+		 */
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+		if (hugetlb)
+			vma_pagesize = PMD_SIZE;
+	}
+
+	if (writable)
+		kvm_set_pfn_dirty(pfn);
 
-	if (hugetlb) {
+	if (fault_status != FSC_PERM)
+		clean_dcache_guest_page(pfn, vma_pagesize);
+
+	if (exec_fault)
+		invalidate_icache_guest_page(pfn, vma_pagesize);
+
+	if (hugetlb && vma_pagesize == PMD_SIZE) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
-		if (writable) {
+		if (writable)
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
-			kvm_set_pfn_dirty(pfn);
-		}
-
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PMD_SIZE);
 
 		if (exec_fault) {
 			new_pmd = kvm_s2pmd_mkexec(new_pmd);
-			invalidate_icache_guest_page(pfn, PMD_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
@@ -1608,16 +1626,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
-			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
 
-		if (fault_status != FSC_PERM)
-			clean_dcache_guest_page(pfn, PAGE_SIZE);
-
 		if (exec_fault) {
 			new_pte = kvm_s2pte_mkexec(new_pte);
-			invalidate_icache_guest_page(pfn, PAGE_SIZE);
 		} else if (fault_status == FSC_PERM) {
 			/* Preserve execute if XN was already cleared */
 			if (stage2_is_exec(kvm, fault_ipa))
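
For readers who want the net effect of the refactoring at a glance, below is
a minimal sketch of the resulting control flow. It is a standalone userspace
model, not kernel code: the helpers are printf() stand-ins for the real KVM
functions, and handle_fault(), perm_fault and the MODEL_* constants are
invented here purely for illustration. It shows the dirty-marking and cache
maintenance running once, sized by vma_pagesize, before the page-size-specific
mapping is installed.

/* Illustrative sketch only -- a userspace model, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE 4096UL
#define MODEL_PMD_SIZE  (2UL * 1024 * 1024)

/* printf() stand-ins for the KVM helpers touched by this patch. */
static void kvm_set_pfn_dirty(unsigned long pfn)
{
	printf("mark pfn %lu dirty\n", pfn);
}

static void clean_dcache_guest_page(unsigned long pfn, unsigned long size)
{
	printf("clean dcache: pfn %lu, %lu bytes\n", pfn, size);
}

static void invalidate_icache_guest_page(unsigned long pfn, unsigned long size)
{
	printf("invalidate icache: pfn %lu, %lu bytes\n", pfn, size);
}

/*
 * Models the post-patch user_mem_abort() flow: the maintenance
 * operations run once, sized by vma_pagesize, and only the final
 * mapping step differs per page size.
 */
static void handle_fault(unsigned long pfn, bool hugetlb, bool writable,
			 bool exec_fault, bool perm_fault)
{
	unsigned long vma_pagesize = hugetlb ? MODEL_PMD_SIZE : MODEL_PAGE_SIZE;

	if (writable)
		kvm_set_pfn_dirty(pfn);

	if (!perm_fault)		/* fault_status != FSC_PERM */
		clean_dcache_guest_page(pfn, vma_pagesize);

	if (exec_fault)
		invalidate_icache_guest_page(pfn, vma_pagesize);

	/* Only the mapping installation remains page-size specific. */
	printf("install %s mapping\n", hugetlb ? "PMD" : "PTE");
}

int main(void)
{
	handle_fault(42, true, true, false, false);	/* writable PMD fault */
	handle_fault(43, false, false, true, false);	/* exec PTE fault */
	return 0;
}

The benefit of the restructuring is visible in the model: supporting another
hugepage size (such as the PUD hugepages introduced later in this series) only
grows the final mapping step, not the shared maintenance code.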