From patchwork Tue Apr 26 11:56:12 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 8938021
From: Michal Hocko
To: ,
Cc: Andrew Morton , Dave Chinner , "Theodore Ts'o" , Chris Mason ,
	Jan Kara , ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	linux-nfs@vger.kernel.org, logfs@logfs.org, xfs@oss.sgi.com,
	linux-ext4@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-mtd@lists.infradead.org, reiserfs-devel@vger.kernel.org,
	linux-ntfs-dev@lists.sourceforge.net,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-afs@lists.infradead.org, LKML , Michal Hocko
Subject: [PATCH 2/2] mm, debug: report when GFP_NO{FS,IO} is used explicitly
 from memalloc_no{fs,io}_{save,restore} context
Date: Tue, 26 Apr 2016 13:56:12 +0200
Message-Id: <1461671772-1269-3-git-send-email-mhocko@kernel.org>
X-Mailer: git-send-email 2.8.0.rc3
In-Reply-To: <1461671772-1269-1-git-send-email-mhocko@kernel.org>
References: <1461671772-1269-1-git-send-email-mhocko@kernel.org>

THIS PATCH IS FOR TESTING ONLY AND NOT MEANT TO HIT LINUS TREE

It is desirable to reduce direct GFP_NO{FS,IO} usage to a minimum and to
prefer the scope usage defined by the memalloc_no{fs,io}_{save,restore}
API instead. Let's help this process along and add a debugging tool which
catches explicit GFP_NO{FS,IO} allocation requests issued from within such
a scope context. The printed stack trace should help to identify the
caller and to evaluate whether it can be changed to use a wider context or
whether it is called from another potentially dangerous context which
needs scope protection as well.

The checks have to be enabled explicitly via the debug_scope_gfp kernel
command line parameter.

Signed-off-by: Michal Hocko
---
 mm/page_alloc.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 86bb5d6ddd7d..085d00280496 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3750,6 +3750,61 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	return page;
 }
 
+static bool debug_scope_gfp;
+
+static int __init enable_debug_scope_gfp(char *unused)
+{
+	debug_scope_gfp = true;
+	return 0;
+}
+
+/*
+ * Spit out the stack trace if the given gfp_mask clears flags which are
+ * already cleared context wide. Such a caller can remove the special
+ * flags clearing and rely on the context wide mask instead.
+ */
+static inline void debug_scope_gfp_context(gfp_t gfp_mask)
+{
+	gfp_t restrict_mask;
+
+	if (likely(!debug_scope_gfp))
+		return;
+
+	/* both NOFS and NOIO are irrelevant when direct reclaim is disabled */
+	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
+		return;
+
+	if (current->flags & PF_MEMALLOC_NOIO)
+		restrict_mask = __GFP_IO;
+	else if ((current->flags & PF_MEMALLOC_NOFS) && (gfp_mask & __GFP_IO))
+		restrict_mask = __GFP_FS;
+	else
+		return;
+
+	if ((gfp_mask & restrict_mask) != restrict_mask) {
+		/*
+		 * If you see this warning then the code does:
+		 * memalloc_no{fs,io}_save()
+		 * ...
+		 * foo()
+		 *    alloc_page(GFP_NO{FS,IO})
+		 * ...
+		 * memalloc_no{fs,io}_restore()
+		 *
+		 * and the explicit GFP_NO{FS,IO} is unnecessary because the
+		 * scope gfp context will enforce it for all allocation
+		 * requests already. If foo() is called from multiple contexts
+		 * then make sure the other contexts are safe wrt. the
+		 * GFP_NO{FS,IO} semantic and either add scope protection into
+		 * the particular paths or change the gfp mask to GFP_KERNEL.
+		 */
+		pr_info("Unnecessarily specific gfp mask:%#x(%pGg) for the %s task wide context\n", gfp_mask, &gfp_mask,
+				(current->flags & PF_MEMALLOC_NOIO) ? "NOIO" : "NOFS");
+		dump_stack();
+	}
+}
+early_param("debug_scope_gfp", enable_debug_scope_gfp);
+
 /*
  * This is the 'heart' of the zoned buddy allocator.
  */
@@ -3796,6 +3851,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 			ac.nodemask);
 
 	/* First allocation attempt */
+	debug_scope_gfp_context(gfp_mask);
 	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
 	if (likely(page))
 		goto out;
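
[ Editorial note, for illustration only -- not part of the patch: a call
  site flagged by this check would typically drop its explicit
  GFP_NO{FS,IO} and rely on the surrounding scope instead. Below is a
  minimal before/after sketch, assuming the memalloc_nofs_{save,restore}
  API introduced in patch 1/2 of this series; foo_alloc_before/after are
  made-up names. ]

#include <linux/gfp.h>
#include <linux/sched.h>

/* Before: the call site narrows its own gfp mask explicitly. */
static struct page *foo_alloc_before(void)
{
	return alloc_page(GFP_NOFS);
}

/*
 * After: the whole fs-unsafe section is marked as a NOFS scope, so every
 * allocation inside it implicitly behaves as GFP_NOFS and the call site
 * can use the plain GFP_KERNEL mask.
 */
static struct page *foo_alloc_after(void)
{
	unsigned int flags = memalloc_nofs_save();
	struct page *page = alloc_page(GFP_KERNEL);

	memalloc_nofs_restore(flags);
	return page;
}

[ With this patch applied and debug_scope_gfp on the kernel command line,
  foo_alloc_before()-style callers running inside such a scope would be
  reported with a stack trace. ]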