From patchwork Tue Oct 27 17:26:58 2020
From: Alexandru Elisei
X-Patchwork-Id: 11861175
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH v3 09/16] KVM: arm64: Use separate function for the
 mapping size in user_mem_abort()
Date: Tue, 27 Oct 2020 17:26:58 +0000
Message-Id: <20201027172705.15181-10-alexandru.elisei@arm.com>
In-Reply-To: <20201027172705.15181-1-alexandru.elisei@arm.com>
References: <20201027172705.15181-1-alexandru.elisei@arm.com>
Cc: maz@kernel.org, will@kernel.org, james.morse@arm.com, julien.thierry.kdev@gmail.com,
 suzuki.poulose@arm.com

user_mem_abort() is already a long and complex function. Make it slightly
easier to understand by abstracting the algorithm for choosing the stage 2
IPA entry size into its own function. This also makes it possible to reuse
the code when guest SPE support is added.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/mmu.c | 55 ++++++++++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 19aacc7d64de..c3c43555490d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -738,12 +738,43 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
 	return PAGE_SIZE;
 }
 
+static short stage2_max_pageshift(struct kvm_memory_slot *memslot,
+				  struct vm_area_struct *vma, hva_t hva,
+				  bool *force_pte)
+{
+	short pageshift;
+
+	*force_pte = false;
+
+	if (is_vm_hugetlb_page(vma))
+		pageshift = huge_page_shift(hstate_vma(vma));
+	else
+		pageshift = PAGE_SHIFT;
+
+	if (memslot_is_logging(memslot) || (vma->vm_flags & VM_PFNMAP)) {
+		*force_pte = true;
+		pageshift = PAGE_SHIFT;
+	}
+
+	if (pageshift == PUD_SHIFT &&
+	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		pageshift = PMD_SHIFT;
+
+	if (pageshift == PMD_SHIFT &&
+	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
+		*force_pte = true;
+		pageshift = PAGE_SHIFT;
+	}
+
+	return pageshift;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool write_fault, writable, force_pte;
 	bool exec_fault;
 	bool device = false;
 	unsigned long mmu_seq;
@@ -776,27 +807,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (is_vm_hugetlb_page(vma))
-		vma_shift = huge_page_shift(hstate_vma(vma));
-	else
-		vma_shift = PAGE_SHIFT;
-
-	if (logging_active ||
-	    (vma->vm_flags & VM_PFNMAP)) {
-		force_pte = true;
-		vma_shift = PAGE_SHIFT;
-	}
-
-	if (vma_shift == PUD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-		vma_shift = PMD_SHIFT;
-
-	if (vma_shift == PMD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
-		force_pte = true;
-		vma_shift = PAGE_SHIFT;
-	}
-
+	vma_shift = stage2_max_pageshift(memslot, vma, hva, &force_pte);
 	vma_pagesize = 1UL << vma_shift;
 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
 		fault_ipa &= ~(vma_pagesize - 1);