From patchwork Tue May 10 11:38:08 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12844900
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akinobu.mita@gmail.com, akpm@linux-foundation.org, vbabka@suse.cz,
    gregkh@linuxfoundation.org, jirislaby@kernel.org, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH 1/2] mm: fix missing handler for __GFP_NOWARN
Date: Tue, 10 May 2022 19:38:08 +0800
Message-Id: <20220510113809.80626-1-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)

We expect no warnings to be issued when we specify __GFP_NOWARN, but
currently in paths like alloc_pages() and kmalloc() some warnings are
still printed. Fix it.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
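[ Illustrative note: below is a minimal standalone userspace sketch of
  the WARN_ON_ONCE_GFP() semantics introduced in mm/internal.h by this
  patch. gfp_t, the __GFP_NOWARN bit value, and the warning printout
  are stubs for illustration only; the sketch only models the
  once-per-call-site and __GFP_NOWARN-suppression behavior, while the
  condition's truth value is still returned to the caller. ]

/* Build: gcc -Wall demo.c (uses GNU C statement expressions). */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;
#define __GFP_NOWARN 0x200u	/* placeholder bit, not the kernel's value */

/* Userspace stand-in: the static __warned gives once-per-expansion-site. */
#define WARN_ON_ONCE_GFP(cond, gfp) ({					\
	static bool __warned;						\
	bool __ret_warn_once = !!(cond);				\
									\
	if (!((gfp) & __GFP_NOWARN) && __ret_warn_once && !__warned) {	\
		__warned = true;					\
		fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__); \
	}								\
	__ret_warn_once;						\
})

int main(void)
{
	/* Same expansion site hit three times: warns exactly once. */
	for (int i = 0; i < 3; i++)
		WARN_ON_ONCE_GFP(true, 0);

	/* __GFP_NOWARN: never warns, but the condition is still reported. */
	if (WARN_ON_ONCE_GFP(true, __GFP_NOWARN))
		printf("condition observed without a warning\n");

	return 0;
}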
 include/linux/fault-inject.h |  2 ++
 lib/fault-inject.c           |  3 +++
 mm/failslab.c                |  3 +++
 mm/internal.h                | 11 +++++++++++
 mm/page_alloc.c              | 22 ++++++++++++----------
 5 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index 2d04f6448cde..9f6e25467844 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -20,6 +20,7 @@ struct fault_attr {
 	atomic_t space;
 	unsigned long verbose;
 	bool task_filter;
+	bool no_warn;
 	unsigned long stacktrace_depth;
 	unsigned long require_start;
 	unsigned long require_end;
@@ -39,6 +40,7 @@ struct fault_attr {
 		.ratelimit_state = RATELIMIT_STATE_INIT_DISABLED,	\
 		.verbose = 2,						\
 		.dname = NULL,						\
+		.no_warn = false,					\
 	}

 #define DECLARE_FAULT_ATTR(name) struct fault_attr name = FAULT_ATTR_INITIALIZER
diff --git a/lib/fault-inject.c b/lib/fault-inject.c
index ce12621b4275..423784d9c058 100644
--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -41,6 +41,9 @@ EXPORT_SYMBOL_GPL(setup_fault_attr);

 static void fail_dump(struct fault_attr *attr)
 {
+	if (attr->no_warn)
+		return;
+
 	if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) {
 		printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n"
 		       "name %pd, interval %lu, probability %lu, "
diff --git a/mm/failslab.c b/mm/failslab.c
index f92fed91ac23..58df9789f1d2 100644
--- a/mm/failslab.c
+++ b/mm/failslab.c
@@ -30,6 +30,9 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 	if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB))
 		return false;

+	if (gfpflags & __GFP_NOWARN)
+		failslab.attr.no_warn = true;
+
 	return should_fail(&failslab.attr, s->object_size);
 }
diff --git a/mm/internal.h b/mm/internal.h
index cf16280ce132..7a268fac6559 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -35,6 +35,17 @@ struct folio_batch;
 /* Do not use these with a slab allocator */
 #define GFP_SLAB_BUG_MASK (__GFP_DMA32|__GFP_HIGHMEM|~__GFP_BITS_MASK)

+#define WARN_ON_ONCE_GFP(cond, gfp)	({				\
+	static bool __section(".data.once") __warned;			\
+	int __ret_warn_once = !!(cond);					\
+									\
+	if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
+
 void page_writeback_init(void);

 static inline void *folio_raw_mapping(struct folio *folio)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..2bf4ce4d0e2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3722,7 +3722,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 	 * We most definitely don't want callers attempting to
 	 * allocate greater than order-1 page units with __GFP_NOFAIL.
 	 */
-	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
+	WARN_ON_ONCE_GFP((gfp_flags & __GFP_NOFAIL) && (order > 1), gfp_flags);

 	do {
 		page = NULL;
@@ -3799,6 +3799,9 @@ static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 			(gfp_mask & __GFP_DIRECT_RECLAIM))
 		return false;

+	if (gfp_mask & __GFP_NOWARN)
+		fail_page_alloc.attr.no_warn = true;
+
 	return should_fail(&fail_page_alloc.attr, 1 << order);
 }

@@ -4346,7 +4349,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 */

 	/* Exhausted what can be done so it's blame time */
-	if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) {
+	if (out_of_memory(&oc) ||
+	    WARN_ON_ONCE_GFP(gfp_mask & __GFP_NOFAIL, gfp_mask)) {
 		*did_some_progress = 1;

 		/*
@@ -4902,8 +4906,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * We also sanity check to catch abuse of atomic reserves being used by
 	 * callers that are not in atomic context.
 	 */
-	if (WARN_ON_ONCE((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
-				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
+	if (WARN_ON_ONCE_GFP((gfp_mask & (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)) ==
+				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM), gfp_mask))
 		gfp_mask &= ~__GFP_ATOMIC;

 retry_cpuset:
@@ -5117,7 +5121,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * All existing users of the __GFP_NOFAIL are blockable, so warn
 		 * of any new users that actually require GFP_NOWAIT
 		 */
-		if (WARN_ON_ONCE(!can_direct_reclaim))
+		if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
 			goto fail;

 		/*
@@ -5125,7 +5129,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * because we cannot reclaim anything and only can loop waiting
 		 * for somebody to do a work for us
 		 */
-		WARN_ON_ONCE(current->flags & PF_MEMALLOC);
+		WARN_ON_ONCE_GFP(current->flags & PF_MEMALLOC, gfp_mask);

 		/*
 		 * non failing costly orders are a hard requirement which we
@@ -5133,7 +5137,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * so that we can identify them and convert them to something
 		 * else.
 		 */
-		WARN_ON_ONCE(order > PAGE_ALLOC_COSTLY_ORDER);
+		WARN_ON_ONCE_GFP(order > PAGE_ALLOC_COSTLY_ORDER, gfp_mask);

 		/*
 		 * Help non-failing allocations by giving them access to memory
@@ -5379,10 +5383,8 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	 * There are several places where we assume that the order value is sane
 	 * so bail out early if the request is out of bound.
 	 */
-	if (unlikely(order >= MAX_ORDER)) {
-		WARN_ON_ONCE(!(gfp & __GFP_NOWARN));
+	if (WARN_ON_ONCE_GFP(order >= MAX_ORDER, gfp))
 		return NULL;
-	}

 	gfp &= gfp_allowed_mask;

 	/*
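[ For context, the kind of caller this patch is meant to keep quiet: a
  hypothetical helper (the name and the probe-then-fall-back scheme are
  made up for illustration) that opportunistically tries a large
  allocation and handles failure itself, so it passes __GFP_NOWARN and
  expects neither the page allocator nor fault injection to print
  anything when the first attempt fails. ]

#include <linux/slab.h>

/*
 * Hypothetical caller (illustrative only): it copes with allocation
 * failure on its own, so it passes __GFP_NOWARN and, with this patch,
 * the allocator and fault-injection paths stay silent when the first
 * attempt fails.
 */
static void *alloc_big_or_fall_back(size_t big, size_t small)
{
	/* Opportunistic large request; failure here is expected. */
	void *buf = kmalloc(big, GFP_KERNEL | __GFP_NOWARN);

	if (!buf)
		buf = kmalloc(small, GFP_KERNEL); /* fallback, may still warn */
	return buf;
}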