From patchwork Wed Aug 29 12:45:41 2018
From: Steve Capper <steve.capper@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/5] arm64: mm: Copy over arch_get_unmapped_area
Date: Wed, 29 Aug 2018 13:45:41 +0100
Message-Id: <20180829124543.25314-4-steve.capper@arm.com>
In-Reply-To: <20180829124543.25314-1-steve.capper@arm.com>
References: <20180829124543.25314-1-steve.capper@arm.com>
Cc: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org

In order to support 52-bit VAs for userspace we need to alter the mmap
area choosing logic to give 52-bit VAs where "high" addresses are
requested.

This patch copies over the arch_get_unmapped_area and
arch_get_unmapped_area_topdown routines from common code such that we
can make modifications to the logic in a future patch.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/pgtable.h |  7 ++++
 arch/arm64/mm/mmap.c             | 84 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 8449e266cd46..8d4175cde295 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -785,6 +785,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 we can have larger VA spaces for userspace, we define our own
+ * arch_get_unmapped_area_ routines to allow for hinting from userspace.
+ */
+#define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 842c8a5fcd53..b516e0bfdb71 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -79,6 +79,90 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
 	return PAGE_ALIGN(STACK_TOP - gap - rnd);
 }
 
+extern unsigned long mmap_min_addr;
+
+unsigned long
+arch_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	struct vm_unmapped_area_info info;
+
+	if (len > TASK_SIZE - mmap_min_addr)
+		return -ENOMEM;
+
+	if (flags & MAP_FIXED)
+		return addr;
+
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma_prev(mm, addr, &prev);
+		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
+			return addr;
+	}
+
+	info.flags = 0;
+	info.length = len;
+	info.low_limit = mm->mmap_base;
+	info.high_limit = TASK_SIZE;
+	info.align_mask = 0;
+	return vm_unmapped_area(&info);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+		const unsigned long len, const unsigned long pgoff,
+		const unsigned long flags)
+{
+	struct vm_area_struct *vma, *prev;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = addr0;
+	struct vm_unmapped_area_info info;
+
+	/* requested length too big for entire address space */
+	if (len > TASK_SIZE - mmap_min_addr)
+		return -ENOMEM;
+
+	if (flags & MAP_FIXED)
+		return addr;
+
+	/* requesting a specific address */
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma_prev(mm, addr, &prev);
+		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
+			return addr;
+	}
+
+	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+	info.length = len;
+	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+	info.high_limit = mm->mmap_base;
+	info.align_mask = 0;
+	addr = vm_unmapped_area(&info);
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	if (offset_in_page(addr)) {
+		VM_BUG_ON(addr != -ENOMEM);
+		info.flags = 0;
+		info.low_limit = TASK_UNMAPPED_BASE;
+		info.high_limit = TASK_SIZE;
+		addr = vm_unmapped_area(&info);
+	}
+
+	return addr;
+}
+
 /*
  * This function, called very early during the creation of a new process VM
  * image, sets up which VM layout function to use:
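
For context, the behaviour this series is building towards can be pictured
from userspace: a process that wants a VA above the default range would pass
a "high" hint address to mmap(), and the arch_get_unmapped_area_ routines
copied here are where that hint would be honoured. Below is a minimal,
hypothetical userspace sketch, not part of this patch; it assumes a kernel
with the full 52-bit VA series applied, and the hint value and opt-in
behaviour are illustrative only:

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/*
		 * Illustrative hint just above the 48-bit boundary;
		 * assumes the kernel only hands out "high" VAs when
		 * the hint requests one.
		 */
		void *hint = (void *)(1UL << 48);
		void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		printf("mapped at %p\n", p);
		munmap(p, 4096);
		return 0;
	}

The presumed design point is that, without such a hint, allocations keep
landing below the existing limit, so applications that depend on the current
48-bit layout continue to work unchanged.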