From patchwork Mon Nov 22 08:48:25 2021
X-Patchwork-Submitter: Christophe Leroy
X-Patchwork-Id: 12631437
From: Christophe Leroy
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex@ghiti.fr
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org
Subject: [PATCH 5/8] powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area()
Date: Mon, 22 Nov 2021 09:48:25 +0100
Message-Id: <51b1ce7eb29ccf2f9a6cf305735260cfea16586a.1637570556.git.christophe.leroy@csgroup.eu>
Instead of setting mm->get_unmapped_area() to either arch_get_unmapped_area()
or radix__arch_get_unmapped_area(), always set it to arch_get_unmapped_area()
and call radix__arch_get_unmapped_area() from there when radix is enabled.

To keep radix__arch_get_unmapped_area() static, move it to slice.c

Do the same with radix__arch_get_unmapped_area_topdown()

Signed-off-by: Christophe Leroy
---
 arch/powerpc/mm/book3s64/slice.c | 104 ++++++++++++++++++++++++++
 arch/powerpc/mm/mmap.c           | 123 -------------------------------
 2 files changed, 104 insertions(+), 123 deletions(-)
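
Note for readers who want the net effect without walking the whole diff: the
standalone userspace mock below condenses the call flow this patch installs.
It is not kernel code and not part of the patch; radix_enabled(),
slice_get_unmapped_area() and the radix helper are stand-in stubs with
simplified signatures, and only the dispatch order mirrors the patch.

/*
 * Userspace mock of the dispatch after this patch (not kernel code).
 * After the change, arch_pick_mmap_layout() no longer chooses between
 * two function pointers; arch_get_unmapped_area() is always the entry
 * point and branches to the radix helper itself.
 */
#include <stdbool.h>
#include <stdio.h>

static bool radix_enabled(void)
{
	return true;	/* pretend the MMU is running in radix mode */
}

/* stub for the radix-specific allocator that now lives in slice.c */
static unsigned long radix__arch_get_unmapped_area(unsigned long addr, unsigned long len)
{
	printf("radix path: addr=%#lx len=%#lx\n", addr, len);
	return addr;
}

/* stub for the hash/slice allocator */
static unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len)
{
	printf("slice path: addr=%#lx len=%#lx\n", addr, len);
	return addr;
}

/* single entry point installed for every task after the patch */
static unsigned long arch_get_unmapped_area(unsigned long addr, unsigned long len)
{
	if (radix_enabled())
		return radix__arch_get_unmapped_area(addr, len);

	return slice_get_unmapped_area(addr, len);
}

int main(void)
{
	arch_get_unmapped_area(0x10000000UL, 0x1000UL);
	return 0;
}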

diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index 62848c5fa2d6..8327a43d29cb 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -639,12 +639,113 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 }
 EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
 
+/*
+ * Same function as generic code used only for radix, because we don't need to overload
+ * the generic one. But we will have to duplicate, because hash select
+ * HAVE_ARCH_UNMAPPED_AREA
+ */
+static unsigned long
+radix__arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
+			      unsigned long pgoff, unsigned long flags)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma;
+	int fixed = (flags & MAP_FIXED);
+	unsigned long high_limit;
+	struct vm_unmapped_area_info info;
+
+	high_limit = DEFAULT_MAP_WINDOW;
+	if (addr >= high_limit || (fixed && (addr + len > high_limit)))
+		high_limit = TASK_SIZE;
+
+	if (len > high_limit)
+		return -ENOMEM;
+
+	if (fixed) {
+		if (addr > high_limit - len)
+			return -ENOMEM;
+		return addr;
+	}
+
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma(mm, addr);
+		if (high_limit - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)))
+			return addr;
+	}
+
+	info.flags = 0;
+	info.length = len;
+	info.low_limit = mm->mmap_base;
+	info.high_limit = high_limit;
+	info.align_mask = 0;
+
+	return vm_unmapped_area(&info);
+}
+
+static unsigned long
+radix__arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+				      const unsigned long len, const unsigned long pgoff,
+				      const unsigned long flags)
+{
+	struct vm_area_struct *vma;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr = addr0;
+	int fixed = (flags & MAP_FIXED);
+	unsigned long high_limit;
+	struct vm_unmapped_area_info info;
+
+	high_limit = DEFAULT_MAP_WINDOW;
+	if (addr >= high_limit || (fixed && (addr + len > high_limit)))
+		high_limit = TASK_SIZE;
+
+	if (len > high_limit)
+		return -ENOMEM;
+
+	if (fixed) {
+		if (addr > high_limit - len)
+			return -ENOMEM;
+		return addr;
+	}
+
+	if (addr) {
+		addr = PAGE_ALIGN(addr);
+		vma = find_vma(mm, addr);
+		if (high_limit - len >= addr && addr >= mmap_min_addr &&
+		    (!vma || addr + len <= vm_start_gap(vma)))
+			return addr;
+	}
+
+	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+	info.length = len;
+	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+	info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
+	info.align_mask = 0;
+
+	addr = vm_unmapped_area(&info);
+	if (!(addr & ~PAGE_MASK))
+		return addr;
+	VM_BUG_ON(addr != -ENOMEM);
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp,
 				     unsigned long addr,
 				     unsigned long len,
 				     unsigned long pgoff,
 				     unsigned long flags)
 {
+	if (radix_enabled())
+		return radix__arch_get_unmapped_area(filp, addr, len, pgoff, flags);
+
 	return slice_get_unmapped_area(addr, len, flags,
 				       mm_ctx_user_psize(&current->mm->context), 0);
 }
@@ -655,6 +756,9 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
 					     const unsigned long pgoff,
 					     const unsigned long flags)
 {
+	if (radix_enabled())
+		return radix__arch_get_unmapped_area_topdown(filp, addr0, len, pgoff, flags);
+
 	return slice_get_unmapped_area(addr0, len, flags,
 				       mm_ctx_user_psize(&current->mm->context), 1);
 }
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index ae683fdc716c..5972d619d274 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -80,126 +80,6 @@ static inline unsigned long mmap_base(unsigned long rnd,
 	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
 }
 
-#ifdef CONFIG_PPC_RADIX_MMU
-/*
- * Same function as generic code used only for radix, because we don't need to overload
- * the generic one. But we will have to duplicate, because hash select
- * HAVE_ARCH_UNMAPPED_AREA
- */
-static unsigned long
-radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
-			      unsigned long len, unsigned long pgoff,
-			      unsigned long flags)
-{
-	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
-	int fixed = (flags & MAP_FIXED);
-	unsigned long high_limit;
-	struct vm_unmapped_area_info info;
-
-	high_limit = DEFAULT_MAP_WINDOW;
-	if (addr >= high_limit || (fixed && (addr + len > high_limit)))
-		high_limit = TASK_SIZE;
-
-	if (len > high_limit)
-		return -ENOMEM;
-
-	if (fixed) {
-		if (addr > high_limit - len)
-			return -ENOMEM;
-		return addr;
-	}
-
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (high_limit - len >= addr && addr >= mmap_min_addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
-
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = high_limit;
-	info.align_mask = 0;
-
-	return vm_unmapped_area(&info);
-}
-
-static unsigned long
-radix__arch_get_unmapped_area_topdown(struct file *filp,
-				      const unsigned long addr0,
-				      const unsigned long len,
-				      const unsigned long pgoff,
-				      const unsigned long flags)
-{
-	struct vm_area_struct *vma;
-	struct mm_struct *mm = current->mm;
-	unsigned long addr = addr0;
-	int fixed = (flags & MAP_FIXED);
-	unsigned long high_limit;
-	struct vm_unmapped_area_info info;
-
-	high_limit = DEFAULT_MAP_WINDOW;
-	if (addr >= high_limit || (fixed && (addr + len > high_limit)))
-		high_limit = TASK_SIZE;
-
-	if (len > high_limit)
-		return -ENOMEM;
-
-	if (fixed) {
-		if (addr > high_limit - len)
-			return -ENOMEM;
-		return addr;
-	}
-
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (high_limit - len >= addr && addr >= mmap_min_addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
-
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
-	info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
-	info.align_mask = 0;
-
-	addr = vm_unmapped_area(&info);
-	if (!(addr & ~PAGE_MASK))
-		return addr;
-	VM_BUG_ON(addr != -ENOMEM);
-
-	/*
-	 * A failed mmap() very likely causes application failure,
-	 * so fall back to the bottom-up function here. This scenario
-	 * can happen with large stack limits and large mmap()
-	 * allocations.
-	 */
-	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
-}
-
-static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
-					 unsigned long random_factor,
-					 struct rlimit *rlim_stack)
-{
-	if (mmap_is_legacy(rlim_stack)) {
-		mm->mmap_base = TASK_UNMAPPED_BASE;
-		mm->get_unmapped_area = radix__arch_get_unmapped_area;
-	} else {
-		mm->mmap_base = mmap_base(random_factor, rlim_stack);
-		mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
-	}
-}
-#else
-/* dummy */
-extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
-					 unsigned long random_factor,
-					 struct rlimit *rlim_stack);
-#endif
 /*
  * This function, called very early during the creation of a new
  * process VM image, sets up which VM layout function to use:
@@ -211,9 +91,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	if (current->flags & PF_RANDOMIZE)
 		random_factor = arch_mmap_rnd();
 
-	if (radix_enabled())
-		return radix__arch_pick_mmap_layout(mm, random_factor,
-						    rlim_stack);
 	/*
 	 * Fall back to the standard layout if the personality
 	 * bit is set, or if the expected stack growth is unlimited:
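
If you want to observe the allocator behaviour this code sits behind from
userspace, the small standalone program below is one way to poke at it. It is
not powerpc-specific and not part of the patch: it maps anonymous memory once
with no hint and once with a page-aligned hint, which is the case the
"if (addr)" blocks above honour when the requested range is free. The hint
value is arbitrary and assumes a 64-bit address space; printed addresses will
differ per system.

/* Standalone demo: how mmap() placement reacts to an address hint. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;

	/* no hint: the kernel picks the spot (top-down on most configs) */
	void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* page-aligned hint: honoured if the range is free, otherwise ignored */
	void *hint = (void *)0x100000000UL;
	void *b = mmap(hint, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (a == MAP_FAILED || b == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	printf("no hint      : %p\n", a);
	printf("hint %p      : %p\n", hint, b);

	munmap(a, len);
	munmap(b, len);
	return EXIT_SUCCESS;
}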