From patchwork Wed Oct 17 16:34:56 2018
X-Patchwork-Submitter: Steve Capper <steve.capper@arm.com>
X-Patchwork-Id: 10645913
From: Steve Capper <steve.capper@arm.com>
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V2 1/4] mm: mmap: Allow for "high" userspace addresses
Date: Wed, 17 Oct 2018 17:34:56 +0100
Message-Id: <20181017163459.20175-2-steve.capper@arm.com>
In-Reply-To: <20181017163459.20175-1-steve.capper@arm.com>
References: <20181017163459.20175-1-steve.capper@arm.com>
Cc: catalin.marinas@arm.com, Steve Capper <steve.capper@arm.com>,
 will.deacon@arm.com, jcm@redhat.com, ard.biesheuvel@linaro.org

This patch adds support for "high" userspace addresses that are
optionally supported on the system and have to be requested via a hint
mechanism (a "high" addr parameter to mmap).

Rather than duplicate the stock arch_get_unmapped_* implementations,
this patch introduces two architectural helper macros and applies them
to arch_get_unmapped_*:

 arch_get_mmap_end(addr)        - get mmap upper limit depending on addr hint
 arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint

If these macros are not defined in architectural code, they default to
(TASK_SIZE) and (base) respectively, so they should not introduce any
behavioural change for architectures that do not define them.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
Changed in V2: these alterations are made to mm/mmap.c rather than
copied over to arch/arm64 and modified there.
---
 mm/mmap.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 5f2b2b184c60..396b8ae12783 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2042,6 +2042,15 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
         return gap_end;
 }

+
+#ifndef arch_get_mmap_end
+#define arch_get_mmap_end(addr) (TASK_SIZE)
+#endif
+
+#ifndef arch_get_mmap_base
+#define arch_get_mmap_base(addr, base) (base)
+#endif
+
 /* Get an address range which is currently unmapped.
  * For shmat() with addr=0.
  *
@@ -2061,8 +2070,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
         struct mm_struct *mm = current->mm;
         struct vm_area_struct *vma, *prev;
         struct vm_unmapped_area_info info;
+        const unsigned long mmap_end = arch_get_mmap_end(addr);

-        if (len > TASK_SIZE - mmap_min_addr)
+        if (len > mmap_end - mmap_min_addr)
                 return -ENOMEM;

         if (flags & MAP_FIXED)
@@ -2071,7 +2081,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
         if (addr) {
                 addr = PAGE_ALIGN(addr);
                 vma = find_vma_prev(mm, addr, &prev);
-                if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+                if (mmap_end - len >= addr && addr >= mmap_min_addr &&
                     (!vma || addr + len <= vm_start_gap(vma)) &&
                     (!prev || addr >= vm_end_gap(prev)))
                         return addr;
@@ -2080,7 +2090,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
         info.flags = 0;
         info.length = len;
         info.low_limit = mm->mmap_base;
-        info.high_limit = TASK_SIZE;
+        info.high_limit = mmap_end;
         info.align_mask = 0;
         return vm_unmapped_area(&info);
 }
@@ -2100,9 +2110,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
         struct mm_struct *mm = current->mm;
         unsigned long addr = addr0;
         struct vm_unmapped_area_info info;
+        const unsigned long mmap_end = arch_get_mmap_end(addr);

         /* requested length too big for entire address space */
-        if (len > TASK_SIZE - mmap_min_addr)
+        if (len > mmap_end - mmap_min_addr)
                 return -ENOMEM;

         if (flags & MAP_FIXED)
@@ -2112,7 +2123,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
         if (addr) {
                 addr = PAGE_ALIGN(addr);
                 vma = find_vma_prev(mm, addr, &prev);
-                if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+                if (mmap_end - len >= addr && addr >= mmap_min_addr &&
                     (!vma || addr + len <= vm_start_gap(vma)) &&
                     (!prev || addr >= vm_end_gap(prev)))
                         return addr;
@@ -2121,7 +2132,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
         info.flags = VM_UNMAPPED_AREA_TOPDOWN;
         info.length = len;
         info.low_limit = max(PAGE_SIZE, mmap_min_addr);
-        info.high_limit = mm->mmap_base;
+        info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
         info.align_mask = 0;
         addr = vm_unmapped_area(&info);

@@ -2135,7 +2146,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
                 VM_BUG_ON(addr != -ENOMEM);
                 info.flags = 0;
                 info.low_limit = TASK_UNMAPPED_BASE;
-                info.high_limit = TASK_SIZE;
+                info.high_limit = mmap_end;
                 addr = vm_unmapped_area(&info);
         }
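
For reference, an architecture that wants to opt in would supply its own
definitions of the two hooks in its asm headers. The sketch below is
illustrative only and is not part of this patch; DEFAULT_MAP_WINDOW is a
placeholder name for whatever boundary an architecture draws between its
default mmap window and its larger, hint-selected address space.

/*
 * Illustrative only (hypothetical arch header): if the hint falls above
 * the default window, allow the full TASK_SIZE range and raise the
 * top-down base accordingly; otherwise keep the existing behaviour.
 */
#define arch_get_mmap_end(addr) \
        ((addr) > DEFAULT_MAP_WINDOW ? TASK_SIZE : DEFAULT_MAP_WINDOW)

#define arch_get_mmap_base(addr, base) \
        ((addr) > DEFAULT_MAP_WINDOW ? \
                (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base))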
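
From userspace, the hint mechanism is simply a non-NULL addr passed to
mmap(). The example below is a rough sketch, not part of this patch: the
hint value is architecture dependent, and (1UL << 52) is only an example
for a hypothetical address space larger than the default window.

/* Illustrative userspace usage: request a mapping above the default window. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        void *hint = (void *)(1UL << 52);   /* example "high" hint value */
        size_t len = 2 * 1024 * 1024;

        void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Whether p actually lands above the default window depends on
         * the architecture defining the hooks introduced above. */
        printf("mapped at %p\n", p);
        munmap(p, len);
        return 0;
}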