From patchwork Thu Jul 30 16:34:30 2015
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 6904461
From: Vlastimil Babka
To: linux-mm@kvack.org, Andrew Morton
Cc: linux-kernel@vger.kernel.org, Mel Gorman, David Rientjes, Greg Thelen,
    "Aneesh Kumar K.V", Christoph Lameter, Pekka Enberg, Joonsoo Kim,
    Naoya Horiguchi, Johannes Weiner, linux-ia64@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, cbe-oss-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, Vlastimil Babka
Subject: [PATCH v3 2/3] mm: unify checks in alloc_pages_node() and __alloc_pages_node()
Date: Thu, 30 Jul 2015 18:34:30 +0200
Message-Id: <1438274071-22551-2-git-send-email-vbabka@suse.cz>
X-Mailer: git-send-email 2.4.6
In-Reply-To: <1438274071-22551-1-git-send-email-vbabka@suse.cz>
References: <1438274071-22551-1-git-send-email-vbabka@suse.cz>

Perform the same debug checks in alloc_pages_node() as are done in
__alloc_pages_node(), by making the former function a wrapper of the
latter one.

In addition to better diagnostics in DEBUG_VM builds for situations
that were already fatal (e.g. an out-of-bounds node id), there are two
visible changes for potential existing buggy callers of
alloc_pages_node():

- Calling alloc_pages_node() with any negative nid (e.g. due to
  arithmetic overflow) was treated as passing NUMA_NO_NODE, and
  fallback to the local node was applied. This will now be fatal.
- Calling alloc_pages_node() with an offline node is now checked in
  DEBUG_VM builds. Since it is not fatal if the node has been
  previously online, and this patch may expose some existing buggy
  callers, change the VM_BUG_ON in __alloc_pages_node() to VM_WARN_ON.

Signed-off-by: Vlastimil Babka
Acked-by: David Rientjes
Acked-by: Johannes Weiner
Acked-by: Christoph Lameter
---
v3: I think this is enough for what patch 4/4 in v2 tried to provide on
top, so there's no patch 4/4 anymore. The DEBUG_VM-only fixup didn't
seem justified to me.
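A minimal sketch of the caller-visible change, for illustration only:
the grab_page_on() helper below is hypothetical, not part of this patch
or the tree, and just shows what the new checks reject.

#include <linux/gfp.h>
#include <linux/nodemask.h>
#include <linux/numa.h>

/* Hypothetical caller demonstrating the new semantics. */
static struct page *grab_page_on(int nid)
{
	/*
	 * Before this patch, any negative nid silently fell back to the
	 * local node. Afterwards only NUMA_NO_NODE gets that treatment:
	 * any other negative or out-of-range value hits the VM_BUG_ON in
	 * __alloc_pages_node(), and an offline node triggers VM_WARN_ON
	 * in DEBUG_VM builds.
	 */
	if (nid != NUMA_NO_NODE &&
	    (nid < 0 || nid >= MAX_NUMNODES || !node_online(nid)))
		nid = NUMA_NO_NODE;	/* fall back to the local node */

	return alloc_pages_node(nid, GFP_KERNEL, 0);
}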
 include/linux/gfp.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index d2c142b..4a12cae2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -310,23 +310,23 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order,
 static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
-	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid));
+	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+	VM_WARN_ON(!node_online(nid));
 
 	return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
 }
 
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
- * prefer the current CPU's node.
+ * prefer the current CPU's node. Otherwise node must be valid and online.
  */
 static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 						unsigned int order)
 {
-	/* Unknown node is current node */
-	if (nid < 0)
+	if (nid == NUMA_NO_NODE)
 		nid = numa_node_id();
 
-	return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
+	return __alloc_pages_node(nid, gfp_mask, order);
 }
 
 #ifdef CONFIG_NUMA