From patchwork Tue Jan 21 18:01:37 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11344379
From: David Hildenbrand
To: stable@vger.kernel.org
Cc: linux-mm@kvack.org, Michal Hocko, Greg Kroah-Hartman, Andrew Morton,
    "Aneesh Kumar K.V", Baoquan He, Dan Williams, Oscar Salvador,
    Wei Yang, David Hildenbrand
Subject: [PATCH for 4.19-stable v2 11/24] powerpc/mm: Fix section mismatch warning
Date: Tue, 21 Jan 2020 19:01:37 +0100
Message-Id: <20200121180150.37454-12-david@redhat.com>
In-Reply-To: <20200121180150.37454-1-david@redhat.com>
References: <20200121180150.37454-1-david@redhat.com>

From: "Aneesh Kumar K.V"

commit 26ad26718dfaa7cf49d106d212ebf2370076c253 upstream.

This patch fixes the section mismatch warnings below:

WARNING: vmlinux.o(.text+0x2d1f44): Section mismatch in reference from the function devm_memremap_pages_release() to the function .meminit.text:arch_remove_memory()

WARNING: vmlinux.o(.text+0x2d265c): Section mismatch in reference from the function devm_memremap_pages() to the function .meminit.text:arch_add_memory()

Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Michael Ellerman
Signed-off-by: David Hildenbrand
---
 arch/powerpc/mm/mem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 1b6e0ef5d14d..625d78547fe7 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -118,8 +118,8 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
 	return -ENODEV;
 }
 
-int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
-		bool want_memblock)
+int __ref arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
+		bool want_memblock)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
@@ -140,8 +140,8 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-int __meminit arch_remove_memory(int nid, u64 start, u64 size,
-			struct vmem_altmap *altmap)
+int __ref arch_remove_memory(int nid, u64 start, u64 size,
+			struct vmem_altmap *altmap)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
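
For background only (illustration, not part of the patch): the warnings come
from modpost seeing a call from ordinary .text (devm_memremap_pages() and
devm_memremap_pages_release()) into .meminit.text, a section that may be
discarded after boot when CONFIG_MEMORY_HOTPLUG is not set. The sketch below
is a minimal illustration of the __meminit/__ref annotations from
<linux/init.h>; the function names are hypothetical and the code is not taken
from this series.

#include <linux/init.h>	/* __meminit, __ref */

/*
 * __meminit places this function in .meminit.text when CONFIG_MEMORY_HOTPLUG
 * is not set, so the code may be freed after boot.
 */
static int __meminit hypothetical_setup_range(unsigned long start_pfn,
					      unsigned long nr_pages)
{
	return 0;
}

/*
 * A plain .text caller of the function above triggers a modpost "Section
 * mismatch" warning. Marking the caller __ref moves it into .ref.text,
 * telling modpost the reference is intentional and has been reviewed; this
 * patch applies the same annotation to arch_add_memory() and
 * arch_remove_memory().
 */
int __ref hypothetical_add_range(unsigned long start_pfn,
				 unsigned long nr_pages)
{
	return hypothetical_setup_range(start_pfn, nr_pages);
}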