From patchwork Wed Apr 1 19:40:07 2015
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 6141881
X-Patchwork-Delegate: horms@verge.net.au
Date: Wed, 1 Apr 2015 12:40:07 -0700
From: Andrew Morton
To: Marc Zyngier
Cc: Geert Uytterhoeven, Kevin Hilman, Ard Biesheuvel, Will Deacon,
 Simon Horman, Tyler Baker, Nishanth Menon, Russell King - ARM Linux,
 Arnd Bergmann, "linux-sh@vger.kernel.org", Catalin Marinas, Magnus Damm,
 "grygorii.strashko@linaro.org", "linux-omap@vger.kernel.org",
 "linux-arm-kernel@lists.infradead.org", Linux Kernel Development,
 "linux-mm@kvack.org"
Subject: Re: [PATCH] mm/migrate: Mark unmap_and_move() "noinline" to avoid
 ICE in gcc 4.7.3
Message-Id: <20150401124007.20c440cc43a482f698f461b8@linux-foundation.org>
In-Reply-To: <551BBEC5.7070801@arm.com>
References: <20150324004537.GA24816@verge.net.au>
 <20150324161358.GA694@kahuna>
 <20150326003939.GA25368@verge.net.au>
 <20150326133631.GB2805@arm.com>
 <20150327002554.GA5527@verge.net.au>
 <20150327100612.GB1562@arm.com>
 <7hbnj99epe.fsf@deeprootsystems.com>
 <7h8uec95t2.fsf@deeprootsystems.com>
 <551BBEC5.7070801@arm.com>
X-Mailer: Sylpheed 3.4.1 (GTK+ 2.24.23; x86_64-pc-linux-gnu)
List-ID: linux-sh@vger.kernel.org

On Wed, 01 Apr 2015 10:47:49 +0100 Marc Zyngier wrote:

> > -static int unmap_and_move(new_page_t get_new_page, free_page_t put_new_page,
> > -			unsigned long private, struct page *page, int force,
> > -			enum migrate_mode mode)
> > +static noinline int unmap_and_move(new_page_t get_new_page,
> > +				   free_page_t put_new_page,
> > +				   unsigned long private, struct page *page,
> > +				   int force, enum migrate_mode mode)
> >  {
> >  	int rc = 0;
> >  	int *result = NULL;
> >
>
> Ouch. That's really ugly. And on 32bit ARM, we end up spilling half of
> the parameters on the stack, which is not going to help performance
> either (not that this would be useful on 32bit ARM anyway...).
>
> Any chance you could make this dependent on some compiler detection
> mechanism?

With my arm compiler (gcc-4.4.4) the patch makes no difference -
unmap_and_move() isn't being inlined anyway.

How does this look?  Kevin, could you please retest?  I might have
fat-fingered something...

--- a/mm/migrate.c~mm-migrate-mark-unmap_and_move-noinline-to-avoid-ice-in-gcc-473-fix
+++ a/mm/migrate.c
@@ -901,10 +901,20 @@ out:
 }
 
 /*
+ * gcc-4.7.3 on arm gets an ICE when inlining unmap_and_move().  Work around
+ * it.
+ */
+#if GCC_VERSION == 40703 && defined(CONFIG_ARM)
+#define ICE_noinline noinline
+#else
+#define ICE_noinline
+#endif
+
+/*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
  */
-static noinline int unmap_and_move(new_page_t get_new_page,
+static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 				   free_page_t put_new_page,
 				   unsigned long private, struct page *page,
 				   int force, enum migrate_mode mode)