From patchwork Wed Nov 25 16:25:30 2020
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11931487
From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, linux-mm@kvack.org, linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, LKML, Daniel Vetter, Vlastimil Babka,
 "Paul E. McKenney", Jason Gunthorpe, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Peter Zijlstra, Ingo Molnar,
 Mathieu Desnoyers, Sebastian Andrzej Siewior, Michel Lespinasse,
 Waiman Long, Thomas Gleixner, Randy Dunlap, Dave Chinner, Qian Cai,
 "Matthew Wilcox (Oracle)", Daniel Vetter
Subject: [PATCH v4 2/3] mm: Extract might_alloc() debug check
Date: Wed, 25 Nov 2020 17:25:30 +0100
Message-Id: <20201125162532.1299794-3-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201125162532.1299794-1-daniel.vetter@ffwll.ch>
References: <20201125162532.1299794-1-daniel.vetter@ffwll.ch>
MIME-Version: 1.0

Extracted from slab.h, which seems to have the most complete version
including the correct might_sleep() check. Roll it out to slob.c.

Motivated by a discussion with Paul about possibly changing call_rcu
behaviour to allocate memory, but only roughly every 500th call.

There are a lot fewer places in the kernel that care about whether
allocating memory is allowed or not (due to deadlocks with reclaim code)
than places that care whether sleeping is allowed. But debugging these
also tends to be a lot harder, so nice descriptive checks could come in
handy. I might have some use eventually for annotations in drivers/gpu.

Note that unlike fs_reclaim_acquire/release, gfpflags_allow_blocking does
not consult the PF_MEMALLOC flags. But there is no flag equivalent for
GFP_NOWAIT, hence this check can't go wrong due to
memalloc_no*_save/restore contexts. Willy is working on a patch series
which might change this:

https://lore.kernel.org/linux-mm/20200625113122.7540-7-willy@infradead.org/

I think best would be if that updates gfpflags_allow_blocking(), since
there's a ton of callers all over the place for that already.

v2: Fix typos in kerneldoc (Randy)

Acked-by: Vlastimil Babka
Acked-by: Paul E. McKenney
Reviewed-by: Jason Gunthorpe
Cc: Randy Dunlap
Cc: Paul E. McKenney
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Vlastimil Babka
Cc: Mathieu Desnoyers
Cc: Sebastian Andrzej Siewior
Cc: Michel Lespinasse
Cc: Daniel Vetter
Cc: Waiman Long
Cc: Thomas Gleixner
Cc: Randy Dunlap
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
Cc: Dave Chinner
Cc: Qian Cai
Cc: linux-xfs@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Signed-off-by: Daniel Vetter
---
 include/linux/sched/mm.h | 16 ++++++++++++++++
 mm/slab.h                |  5 +----
 mm/slob.c                |  6 ++----
 3 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index d5ece7a9a403..a11a61b5226f 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -180,6 +180,22 @@ static inline void fs_reclaim_acquire(gfp_t gfp_mask) { }
 static inline void fs_reclaim_release(gfp_t gfp_mask) { }
 #endif
 
+/**
+ * might_alloc - Mark possible allocation sites
+ * @gfp_mask: gfp_t flags that would be used to allocate
+ *
+ * Similar to might_sleep() and other annotations, this can be used in functions
+ * that might allocate, but often don't. Compiles to nothing without
+ * CONFIG_LOCKDEP. Includes a conditional might_sleep() if @gfp allows blocking.
+ */
+static inline void might_alloc(gfp_t gfp_mask)
+{
+	fs_reclaim_acquire(gfp_mask);
+	fs_reclaim_release(gfp_mask);
+
+	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+}
+
 /**
  * memalloc_noio_save - Marks implicit GFP_NOIO allocation scope.
  *
diff --git a/mm/slab.h b/mm/slab.h
index 6d7c6a5056ba..37b981247e5d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -500,10 +500,7 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 {
 	flags &= gfp_allowed_mask;
 
-	fs_reclaim_acquire(flags);
-	fs_reclaim_release(flags);
-
-	might_sleep_if(gfpflags_allow_blocking(flags));
+	might_alloc(flags);
 
 	if (should_failslab(s, flags))
 		return NULL;
diff --git a/mm/slob.c b/mm/slob.c
index 7cc9805c8091..8d4bfa46247f 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -474,8 +474,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 
 	gfp &= gfp_allowed_mask;
 
-	fs_reclaim_acquire(gfp);
-	fs_reclaim_release(gfp);
+	might_alloc(gfp);
 
 	if (size < PAGE_SIZE - minalign) {
 		int align = minalign;
@@ -597,8 +596,7 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	flags &= gfp_allowed_mask;
 
-	fs_reclaim_acquire(flags);
-	fs_reclaim_release(flags);
+	might_alloc(flags);
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
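
For context on how the new annotation is meant to be used outside the slab
allocators (e.g. the drivers/gpu use mentioned above), here is a minimal
sketch; it is not part of this patch, and the struct and function names are
made up. Only might_alloc() and kmalloc() are real kernel interfaces.

#include <linux/sched/mm.h>
#include <linux/slab.h>

struct example_cache {
	void *cached;	/* buffer left over from a previous call */
	size_t size;
};

/*
 * Illustrative sketch only. might_alloc() is called unconditionally, so
 * lockdep's fs_reclaim tracking and might_sleep() flag bad calling
 * contexts even when the fast path below never actually allocates.
 */
static void *example_get_buffer(struct example_cache *cache, gfp_t gfp)
{
	void *buf;

	might_alloc(gfp);

	buf = cache->cached;
	if (buf) {
		cache->cached = NULL;
		return buf;			/* fast path: no allocation */
	}

	return kmalloc(cache->size, gfp);	/* slow path: really allocates */
}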