From patchwork Mon Dec 31 09:29:25 2018
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 10745383
From: Mike Rapoport
To: Andrew Morton
Cc: Michal Hocko, linux-sh@vger.kernel.org, Benjamin Herrenschmidt,
    Heiko Carstens, linux-mm@kvack.org, Rich Felker, Paul Mackerras,
    sparclinux@vger.kernel.org, Vincent Chen, Jonas Bonn,
    linux-s390@vger.kernel.org, linux-c6x-dev@linux-c6x.org,
    Yoshinori Sato, Michael Ellerman, Russell King, Mike Rapoport,
    Mark Salter, Arnd Bergmann, Stefan Kristiansson,
    openrisc@lists.librecores.org, Greentime Hu, Stafford Horne,
    Guan Xuetao, linux-arm-kernel@lists.infradead.org, Michal Simek,
    linux-kernel@vger.kernel.org, Martin Schwidefsky,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller"
Subject: [PATCH v4 5/6] arch: simplify several early memory allocations
Date: Mon, 31 Dec 2018 11:29:25 +0200
Message-Id: <1546248566-14910-6-git-send-email-rppt@linux.ibm.com>
In-Reply-To: <1546248566-14910-1-git-send-email-rppt@linux.ibm.com>
References: <1546248566-14910-1-git-send-email-rppt@linux.ibm.com>
X-Mailer: git-send-email 2.7.4

There are several early memory allocations in arch/ code that use
memblock_phys_alloc() to allocate memory, convert the returned physical
address to the virtual address and then set the allocated memory to zero.

Exactly the same behaviour can be achieved simply by calling
memblock_alloc(): it allocates the memory in the same way as
memblock_phys_alloc(), then it performs the phys_to_virt() conversion and
clears the allocated memory.

Replace the longer sequence with a simpler call to memblock_alloc().
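
As an illustration, the conversion in every file has the same shape as the
arch/arm hunk below (a sketch only; the _old/_new suffixes are made up for
this example and do not appear in the patch):

	/* before: allocate physical memory, convert it by hand, zero it by hand */
	static void __init *early_alloc_aligned_old(unsigned long sz, unsigned long align)
	{
		void *ptr = __va(memblock_phys_alloc(sz, align));

		memset(ptr, 0, sz);
		return ptr;
	}

	/* after: memblock_alloc() allocates, converts to a virtual address and zeroes */
	static void __init *early_alloc_aligned_new(unsigned long sz, unsigned long align)
	{
		return memblock_alloc(sz, align);
	}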
Signed-off-by: Mike Rapoport
---
 arch/arm/mm/mmu.c                     |  4 +---
 arch/c6x/mm/dma-coherent.c            |  9 ++-------
 arch/nds32/mm/init.c                  | 12 ++++--------
 arch/powerpc/kernel/setup-common.c    |  4 ++--
 arch/powerpc/mm/ppc_mmu_32.c          |  3 +--
 arch/powerpc/platforms/powernv/opal.c |  3 +--
 arch/s390/numa/numa.c                 |  6 +-----
 arch/sparc/kernel/prom_64.c           |  7 ++-----
 arch/sparc/mm/init_64.c               |  9 +++------
 arch/unicore32/mm/mmu.c               |  4 +---
 10 files changed, 18 insertions(+), 43 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index f5cc1cc..0a04c9a5 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -721,9 +721,7 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 
 static void __init *early_alloc_aligned(unsigned long sz, unsigned long align)
 {
-	void *ptr = __va(memblock_phys_alloc(sz, align));
-	memset(ptr, 0, sz);
-	return ptr;
+	return memblock_alloc(sz, align);
 }
 
 static void __init *early_alloc(unsigned long sz)
diff --git a/arch/c6x/mm/dma-coherent.c b/arch/c6x/mm/dma-coherent.c
index 75b7957..0be2898 100644
--- a/arch/c6x/mm/dma-coherent.c
+++ b/arch/c6x/mm/dma-coherent.c
@@ -121,8 +121,6 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
  */
 void __init coherent_mem_init(phys_addr_t start, u32 size)
 {
-	phys_addr_t bitmap_phys;
-
 	if (!size)
 		return;
 
@@ -138,11 +136,8 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
 	if (dma_size & (PAGE_SIZE - 1))
 		++dma_pages;
 
-	bitmap_phys = memblock_phys_alloc(BITS_TO_LONGS(dma_pages) * sizeof(long),
-					  sizeof(long));
-
-	dma_bitmap = phys_to_virt(bitmap_phys);
-	memset(dma_bitmap, 0, dma_pages * PAGE_SIZE);
+	dma_bitmap = memblock_alloc(BITS_TO_LONGS(dma_pages) * sizeof(long),
+				    sizeof(long));
 }
 
 static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
diff --git a/arch/nds32/mm/init.c b/arch/nds32/mm/init.c
index 253f79f..d1e521c 100644
--- a/arch/nds32/mm/init.c
+++ b/arch/nds32/mm/init.c
@@ -78,8 +78,7 @@ static void __init map_ram(void)
 		}
 
 		/* Alloc one page for holding PTE's... */
-		pte = (pte_t *) __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
-		memset(pte, 0, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		set_pmd(pme, __pmd(__pa(pte) + _PAGE_KERNEL_TABLE));
 
 		/* Fill the newly allocated page with PTE'S */
@@ -111,8 +110,7 @@ static void __init fixedrange_init(void)
 	pgd = swapper_pg_dir + pgd_index(vaddr);
 	pud = pud_offset(pgd, vaddr);
 	pmd = pmd_offset(pud, vaddr);
-	fixmap_pmd_p = (pmd_t *) __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
-	memset(fixmap_pmd_p, 0, PAGE_SIZE);
+	fixmap_pmd_p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	set_pmd(pmd, __pmd(__pa(fixmap_pmd_p) + _PAGE_KERNEL_TABLE));
 
 #ifdef CONFIG_HIGHMEM
@@ -124,8 +122,7 @@ static void __init fixedrange_init(void)
 	pgd = swapper_pg_dir + pgd_index(vaddr);
 	pud = pud_offset(pgd, vaddr);
 	pmd = pmd_offset(pud, vaddr);
-	pte = (pte_t *) __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
-	memset(pte, 0, PAGE_SIZE);
+	pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	set_pmd(pmd, __pmd(__pa(pte) + _PAGE_KERNEL_TABLE));
 	pkmap_page_table = pte;
 #endif /* CONFIG_HIGHMEM */
@@ -150,8 +147,7 @@ void __init paging_init(void)
 	fixedrange_init();
 
 	/* allocate space for empty_zero_page */
-	zero_page = __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
-	memset(zero_page, 0, PAGE_SIZE);
+	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	zone_sizes_init();
 
 	empty_zero_page = virt_to_page(zero_page);
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ca00fbb..82be48c 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -459,8 +459,8 @@ void __init smp_setup_cpu_maps(void)
 
 	DBG("smp_setup_cpu_maps()\n");
 
-	cpu_to_phys_id = __va(memblock_phys_alloc(nr_cpu_ids * sizeof(u32), __alignof__(u32)));
-	memset(cpu_to_phys_id, 0, nr_cpu_ids * sizeof(u32));
+	cpu_to_phys_id = memblock_alloc(nr_cpu_ids * sizeof(u32),
+					__alignof__(u32));
 
 	for_each_node_by_type(dn, "cpu") {
 		const __be32 *intserv;
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 3f41932..36a664f 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -211,8 +211,7 @@ void __init MMU_init_hw(void)
 	 * Find some memory for the hash table.
 	 */
 	if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
-	Hash = __va(memblock_phys_alloc(Hash_size, Hash_size));
-	memset(Hash, 0, Hash_size);
+	Hash = memblock_alloc(Hash_size, Hash_size);
 	_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
 
 	Hash_end = (struct hash_pte *) ((unsigned long)Hash + Hash_size);
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index 79586f1..8e157f9 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -171,8 +171,7 @@ int __init early_init_dt_scan_recoverable_ranges(unsigned long node,
 	/*
 	 * Allocate a buffer to hold the MC recoverable ranges.
 	 */
-	mc_recoverable_range =__va(memblock_phys_alloc(size, __alignof__(u64)));
-	memset(mc_recoverable_range, 0, size);
+	mc_recoverable_range = memblock_alloc(size, __alignof__(u64));
 
 	for (i = 0; i < mc_recoverable_range_len; i++) {
 		mc_recoverable_range[i].start_addr =
diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
index d31bde0..2281a88 100644
--- a/arch/s390/numa/numa.c
+++ b/arch/s390/numa/numa.c
@@ -62,11 +62,7 @@ int numa_debug_enabled;
  */
 static __init pg_data_t *alloc_node_data(void)
 {
-	pg_data_t *res;
-
-	res = (pg_data_t *) memblock_phys_alloc(sizeof(pg_data_t), 8);
-	memset(res, 0, sizeof(pg_data_t));
-	return res;
+	return memblock_alloc(sizeof(pg_data_t), 8);
 }
 
 /*
diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c
index e897a4d..c50ff1f 100644
--- a/arch/sparc/kernel/prom_64.c
+++ b/arch/sparc/kernel/prom_64.c
@@ -34,16 +34,13 @@ void * __init prom_early_alloc(unsigned long size)
 {
-	unsigned long paddr = memblock_phys_alloc(size, SMP_CACHE_BYTES);
-	void *ret;
+	void *ret = memblock_alloc(size, SMP_CACHE_BYTES);
 
-	if (!paddr) {
+	if (!ret) {
 		prom_printf("prom_early_alloc(%lu) failed\n", size);
 		prom_halt();
 	}
-	ret = __va(paddr);
-	memset(ret, 0, size);
 
 	prom_early_allocated += size;
 	return ret;
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 3c8aac2..52884f4 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1089,16 +1089,13 @@ static void __init allocate_node_data(int nid)
 	struct pglist_data *p;
 	unsigned long start_pfn, end_pfn;
 #ifdef CONFIG_NEED_MULTIPLE_NODES
-	unsigned long paddr;
 
-	paddr = memblock_phys_alloc_try_nid(sizeof(struct pglist_data),
-					    SMP_CACHE_BYTES, nid);
-	if (!paddr) {
+	NODE_DATA(nid) = memblock_alloc_node(sizeof(struct pglist_data),
+					     SMP_CACHE_BYTES, nid);
+	if (!NODE_DATA(nid)) {
 		prom_printf("Cannot allocate pglist_data for nid[%d]\n", nid);
 		prom_halt();
 	}
-	NODE_DATA(nid) = __va(paddr);
-	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
 
 	NODE_DATA(nid)->node_id = nid;
 #endif
diff --git a/arch/unicore32/mm/mmu.c b/arch/unicore32/mm/mmu.c
index 040a8c2..50d8c1a 100644
--- a/arch/unicore32/mm/mmu.c
+++ b/arch/unicore32/mm/mmu.c
@@ -143,9 +143,7 @@ static void __init build_mem_type_table(void)
 
 static void __init *early_alloc(unsigned long sz)
 {
-	void *ptr = __va(memblock_phys_alloc(sz, sz));
-	memset(ptr, 0, sz);
-	return ptr;
+	return memblock_alloc(sz, sz);
 }
 
 static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,