From patchwork Fri Sep 14 12:10:33 2018
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 10600679
From: Mike Rapoport
To: linux-mm@kvack.org
Cc: Andrew Morton, Catalin Marinas, Chris Zankel, "David S. Miller",
    Geert Uytterhoeven, Greentime Hu, Greg Kroah-Hartman, Guan Xuetao,
    Ingo Molnar, "James E.J. Bottomley", Jonas Bonn, Jonathan Corbet,
    Ley Foon Tan, Mark Salter, Martin Schwidefsky, Matt Turner,
    Michael Ellerman, Michal Hocko, Michal Simek, Palmer Dabbelt,
    Paul Burton, Richard Kuo, Richard Weinberger, Rich Felker,
    Russell King, Serge Semin, Thomas Gleixner, Tony Luck, Vineet Gupta,
    Yoshinori Sato, linux-alpha@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org,
    linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@linux-mips.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
    nios2-dev@lists.rocketboards.org, openrisc@lists.librecores.org,
    sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp,
    Mike Rapoport
Subject: [PATCH 18/30] memblock: replace alloc_bootmem_low_pages with memblock_alloc_low
Date: Fri, 14 Sep 2018 15:10:33 +0300
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1536927045-23536-1-git-send-email-rppt@linux.vnet.ibm.com>
References: <1536927045-23536-1-git-send-email-rppt@linux.vnet.ibm.com>
Message-Id: <1536927045-23536-19-git-send-email-rppt@linux.vnet.ibm.com>

The alloc_bootmem_low_pages() function allocates PAGE_SIZE-aligned
regions from low memory. memblock_alloc_low() with the alignment set to
PAGE_SIZE does exactly the same thing.
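As a standalone illustration of the equivalence described above (not part
of the patch itself), the call-level change looks roughly as follows; the
helper early_low_page() is made up for this example:

/*
 * Sketch only -- early_low_page() is a hypothetical early-boot caller,
 * not code from this patch. Both the old and the new call return
 * PAGE_SIZE bytes of PAGE_SIZE-aligned memory from low memory.
 */
#include <linux/init.h>
#include <linux/memblock.h>	/* memblock_alloc_low() */

static void * __init early_low_page(void)
{
	/* before: alloc_bootmem_low_pages(PAGE_SIZE), alignment implied */
	/* after:  the alignment is passed explicitly as the 2nd argument */
	return memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
}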
The conversion is done using the following semantic patch:

@@
expression e;
@@
- alloc_bootmem_low_pages(e)
+ memblock_alloc_low(e, PAGE_SIZE)

Signed-off-by: Mike Rapoport
Acked-by: Michal Hocko
---
 arch/arc/mm/highmem.c                |  2 +-
 arch/m68k/atari/stram.c              |  3 ++-
 arch/m68k/mm/motorola.c              |  5 +++--
 arch/mips/cavium-octeon/dma-octeon.c |  2 +-
 arch/mips/mm/init.c                  |  3 ++-
 arch/um/kernel/mem.c                 | 10 ++++++----
 arch/xtensa/mm/mmu.c                 |  2 +-
 7 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 77ff64a..f582dc8 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -123,7 +123,7 @@ static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
 	pud_k = pud_offset(pgd_k, kvaddr);
 	pmd_k = pmd_offset(pud_k, kvaddr);
 
-	pte_k = (pte_t *)alloc_bootmem_low_pages(PAGE_SIZE);
+	pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
 	pmd_populate_kernel(&init_mm, pmd_k, pte_k);
 	return pte_k;
 }
diff --git a/arch/m68k/atari/stram.c b/arch/m68k/atari/stram.c
index c83d664..1089d67 100644
--- a/arch/m68k/atari/stram.c
+++ b/arch/m68k/atari/stram.c
@@ -95,7 +95,8 @@ void __init atari_stram_reserve_pages(void *start_mem)
 {
 	if (kernel_in_stram) {
 		pr_debug("atari_stram pool: kernel in ST-RAM, using alloc_bootmem!\n");
-		stram_pool.start = (resource_size_t)alloc_bootmem_low_pages(pool_size);
+		stram_pool.start = (resource_size_t)memblock_alloc_low(pool_size,
+									PAGE_SIZE);
 		stram_pool.end = stram_pool.start + pool_size - 1;
 		request_resource(&iomem_resource, &stram_pool);
 		stram_virt_offset = 0;
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 4e17ecb..8bcf57e 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -55,7 +55,7 @@ static pte_t * __init kernel_page_table(void)
 {
 	pte_t *ptablep;
 
-	ptablep = (pte_t *)alloc_bootmem_low_pages(PAGE_SIZE);
+	ptablep = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
 
 	clear_page(ptablep);
 	__flush_page_to_ram(ptablep);
@@ -95,7 +95,8 @@ static pmd_t * __init kernel_ptr_table(void)
 
 	last_pgtable += PTRS_PER_PMD;
 	if (((unsigned long)last_pgtable & ~PAGE_MASK) == 0) {
-		last_pgtable = (pmd_t *)alloc_bootmem_low_pages(PAGE_SIZE);
+		last_pgtable = (pmd_t *)memblock_alloc_low(PAGE_SIZE,
+							   PAGE_SIZE);
 
 		clear_page(last_pgtable);
 		__flush_page_to_ram(last_pgtable);
diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
index 236833b..c44c1a6 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -244,7 +244,7 @@ void __init plat_swiotlb_setup(void)
 	swiotlb_nslabs = ALIGN(swiotlb_nslabs, IO_TLB_SEGSIZE);
 	swiotlbsize = swiotlb_nslabs << IO_TLB_SHIFT;
 
-	octeon_swiotlb = alloc_bootmem_low_pages(swiotlbsize);
+	octeon_swiotlb = memblock_alloc_low(swiotlbsize, PAGE_SIZE);
 
 	if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM)
 		panic("Cannot allocate SWIOTLB buffer");
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 400676c..a010fba7 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -244,7 +244,8 @@ void __init fixrange_init(unsigned long start, unsigned long end,
 			pmd = (pmd_t *)pud;
 			for (; (k < PTRS_PER_PMD) && (vaddr < end); pmd++, k++) {
 				if (pmd_none(*pmd)) {
-					pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+					pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
+									   PAGE_SIZE);
 					set_pmd(pmd, __pmd((unsigned long)pte));
 					BUG_ON(pte != pte_offset_kernel(pmd, 0));
 				}
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 3c0e470..185f6bb 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -64,7 +64,8 @@ void __init mem_init(void)
 static void __init one_page_table_init(pmd_t *pmd)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+		pte_t *pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
+							  PAGE_SIZE);
 		set_pmd(pmd, __pmd(_KERNPG_TABLE +
 					(unsigned long) __pa(pte)));
 		if (pte != pte_offset_kernel(pmd, 0))
@@ -75,7 +76,7 @@ static void __init one_page_table_init(pmd_t *pmd)
 static void __init one_md_table_init(pud_t *pud)
 {
 #ifdef CONFIG_3_LEVEL_PGTABLES
-	pmd_t *pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+	pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
 	set_pud(pud, __pud(_KERNPG_TABLE + (unsigned long) __pa(pmd_table)));
 	if (pmd_table != pmd_offset(pud, 0))
 		BUG();
@@ -124,7 +125,7 @@ static void __init fixaddr_user_init( void)
 		return;
 
 	fixrange_init( FIXADDR_USER_START, FIXADDR_USER_END, swapper_pg_dir);
-	v = (unsigned long) alloc_bootmem_low_pages(size);
+	v = (unsigned long) memblock_alloc_low(size, PAGE_SIZE);
 	memcpy((void *) v , (void *) FIXADDR_USER_START, size);
 	p = __pa(v);
 	for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
@@ -143,7 +144,8 @@ void __init paging_init(void)
 	unsigned long zones_size[MAX_NR_ZONES], vaddr;
 	int i;
 
-	empty_zero_page = (unsigned long *) alloc_bootmem_low_pages(PAGE_SIZE);
+	empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
+								PAGE_SIZE);
 
 	for (i = 0; i < ARRAY_SIZE(zones_size); i++)
 		zones_size[i] = 0;
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index 9d1ecfc..f33a1ff 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -31,7 +31,7 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
 	pr_debug("%s: vaddr: 0x%08lx, n_pages: %ld\n", __func__, vaddr, n_pages);
 
-	pte = alloc_bootmem_low_pages(n_pages * sizeof(pte_t));
+	pte = memblock_alloc_low(n_pages * sizeof(pte_t), PAGE_SIZE);
 
 	for (i = 0; i < n_pages; ++i)
 		pte_clear(NULL, 0, pte + i);