From patchwork Mon Oct 25 15:02:20 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12582069
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig, Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH 1/4] mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc
Date: Mon, 25 Oct 2021 17:02:20 +0200
Message-Id: <20211025150223.13621-2-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025150223.13621-1-mhocko@kernel.org>
References: <20211025150223.13621-1-mhocko@kernel.org>

From: Michal Hocko

vmalloc historically hasn't supported GFP_NO{FS,IO} requests because page
table allocations do not support an externally provided gfp mask and are
performed as GFP_KERNEL-like allocations. For a few years now we have had the
scope APIs (memalloc_no{fs,io}_{save,restore}) to enforce the NOFS and NOIO
constraints implicitly on all allocations within the scope. The hope was that
those scopes would be defined at a higher level, where the reclaim recursion
boundary starts/stops (e.g. when a lock required during memory reclaim is
taken). It seems that not all NOFS/NOIO users have adopted this approach;
instead they work around it by wrapping a single [k]vmalloc allocation in a
scope API. These workarounds serve neither the purpose of better documenting
reclaim recursion nor the reduction of explicit GFP_NO{FS,IO} usage, so let's
just provide the semantics they are asking for without the need for
workarounds.

Add support for GFP_NOFS and GFP_NOIO to vmalloc directly. All internal
allocations already comply with the given gfp_mask. The only current
exception is vmap_pages_range, which maps kernel page tables. Infer the
proper scope API based on the given gfp mask.
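As an illustration (not part of the patch): a hypothetical filesystem helper
that must not recurse into the FS from reclaim could previously only reach
vmalloc through the scope API; with this patch it can pass GFP_NOFS directly.
The helper names and context below are made up:

#include <linux/vmalloc.h>
#include <linux/sched/mm.h>

/* Before: the caller wraps vmalloc in a NOFS scope itself. */
static void *fs_alloc_big_buffer_old(unsigned long size)
{
	unsigned int nofs_flag;
	void *buf;

	nofs_flag = memalloc_nofs_save();
	buf = __vmalloc(size, GFP_KERNEL);
	memalloc_nofs_restore(nofs_flag);

	return buf;
}

/*
 * After this patch: the gfp mask is honoured by vmalloc itself, including
 * the page table allocations behind vmap_pages_range.
 */
static void *fs_alloc_big_buffer_new(unsigned long size)
{
	return __vmalloc(size, GFP_NOFS);
}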
Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d77830ff604c..c6cc77d2f366 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2889,6 +2889,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	unsigned long array_size;
 	unsigned int nr_small_pages = size >> PAGE_SHIFT;
 	unsigned int page_order;
+	unsigned int flags;
+	int ret;
 
 	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
 	gfp_mask |= __GFP_NOWARN;
@@ -2930,8 +2932,24 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages,
-			page_shift) < 0) {
+	/*
+	 * page tables allocations ignore external gfp mask, enforce it
+	 * by the scope API
+	 */
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		flags = memalloc_nofs_save();
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		flags = memalloc_noio_save();
+
+	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift);
+
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		memalloc_nofs_restore(flags);
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		memalloc_noio_restore(flags);
+
+	if (ret < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to map pages",
 			area->nr_pages * PAGE_SIZE);

From patchwork Mon Oct 25 15:02:21 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12582071
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig, Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH 2/4] mm/vmalloc: add support for __GFP_NOFAIL
Date: Mon, 25 Oct 2021 17:02:21 +0200
Message-Id: <20211025150223.13621-3-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025150223.13621-1-mhocko@kernel.org>
References: <20211025150223.13621-1-mhocko@kernel.org>

From: Michal Hocko

Dave Chinner has mentioned that some of the xfs code would benefit from
kvmalloc support for __GFP_NOFAIL because they have allocations that cannot
fail and do not fit into a single page.

The large part of the vmalloc implementation already complies with the given
gfp flags, so there is no work needed there. The area and page table
allocations are the exception. Implement a retry loop for those.

Add a short sleep before retrying. 1 jiffy is a completely arbitrary timeout.
Ideally the retry would wait for an explicit event - e.g. a change to the
vmalloc space if the failure was caused by its fragmentation or depletion.
But there are multiple different reasons to retry and this could become much
more complex. Keep the retry simple for now and just sleep to prevent hogging
CPUs.
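As a usage sketch (not part of the patch; the helper below is hypothetical):
a caller with an allocation that must not fail can now rely on vmalloc
honouring __GFP_NOFAIL instead of open-coding its own retry loop:

#include <linux/vmalloc.h>

/*
 * Must-succeed allocation of a multi-page table. After this patch the
 * vmalloc path retries internally (with the 1 jiffy sleep described
 * above) rather than returning NULL.
 */
static void *alloc_table_nofail(unsigned long size)
{
	return __vmalloc(size, GFP_KERNEL | __GFP_NOFAIL);
}

kvmalloc callers such as the xfs code mentioned above get the same behaviour
through the vmalloc fallback once the last patch in the series lifts the
GFP_KERNEL-only restriction.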
Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c6cc77d2f366..602649919a9d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2941,8 +2941,12 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
 		flags = memalloc_noio_save();
 
-	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+	do {
+		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
 			page_shift);
+		if (ret < 0)
+			schedule_timeout_uninterruptible(1);
+	} while ((gfp_mask & __GFP_NOFAIL) && (ret < 0));
 
 	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
 		memalloc_nofs_restore(flags);
@@ -3032,6 +3036,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, vm_struct allocation failed",
 			real_size);
+		if (gfp_mask & __GFP_NOFAIL) {
+			schedule_timeout_uninterruptible(1);
+			goto again;
+		}
 		goto fail;
 	}

From patchwork Mon Oct 25 15:02:22 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12582065
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig, Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH 3/4] mm/vmalloc: be more explicit about supported gfp flags.
Date: Mon, 25 Oct 2021 17:02:22 +0200
Message-Id: <20211025150223.13621-4-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025150223.13621-1-mhocko@kernel.org>
References: <20211025150223.13621-1-mhocko@kernel.org>

From: Michal Hocko

The core of the vmalloc allocator, __vmalloc_area_node, doesn't say anything
about its gfp mask argument. Not all gfp flags are supported though. Be more
explicit about the constraints.

Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 602649919a9d..2199d821c981 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2980,8 +2980,16 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
  * @caller: caller's return address
  *
  * Allocate enough pages to cover @size from the page level
- * allocator with @gfp_mask flags. Map them into contiguous
- * kernel virtual space, using a pagetable protection of @prot.
+ * allocator with @gfp_mask flags. Please note that the full set of gfp
+ * flags are not supported. GFP_KERNEL would be a preferred allocation mode
+ * but GFP_NOFS and GFP_NOIO are supported as well. Zone modifiers are not
+ * supported. From the reclaim modifiers __GFP_DIRECT_RECLAIM is required (aka
+ * GFP_NOWAIT is not supported) and only __GFP_NOFAIL is supported (aka
+ * __GFP_NORETRY and __GFP_RETRY_MAYFAIL are not supported).
+ * __GFP_NOWARN can be used to suppress error messages about failures.
+ *
+ * Map them into contiguous kernel virtual space, using a pagetable
+ * protection of @prot.
  *
  * Return: the address of the area or %NULL on failure
  */
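To make the documented constraints concrete, a hedged sketch (not part of the
patch) of gfp masks that the comment above says are and are not honoured; the
wrapper function and sizes are made up for illustration:

#include <linux/vmalloc.h>
#include <linux/gfp.h>

static void vmalloc_gfp_examples(void)
{
	void *p;

	p = __vmalloc(16 * PAGE_SIZE, GFP_KERNEL);	/* preferred mode */
	vfree(p);
	p = __vmalloc(16 * PAGE_SIZE, GFP_NOFS);	/* supported via the nofs scope */
	vfree(p);
	p = __vmalloc(16 * PAGE_SIZE, GFP_NOIO);	/* supported via the noio scope */
	vfree(p);
	p = __vmalloc(16 * PAGE_SIZE, GFP_KERNEL | __GFP_NOFAIL); /* retries until it succeeds */
	vfree(p);

	/*
	 * Per the comment above, these are not supported: GFP_NOWAIT (lacks
	 * __GFP_DIRECT_RECLAIM), __GFP_NORETRY, __GFP_RETRY_MAYFAIL and zone
	 * modifiers such as __GFP_DMA.
	 */
}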
From patchwork Mon Oct 25 15:02:23 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12582073
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig, Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH 4/4] mm: allow !GFP_KERNEL allocations for kvmalloc
Date: Mon, 25 Oct 2021 17:02:23 +0200
Message-Id: <20211025150223.13621-5-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025150223.13621-1-mhocko@kernel.org>
References: <20211025150223.13621-1-mhocko@kernel.org>

From: Michal Hocko

Support for GFP_NO{FS,IO} and __GFP_NOFAIL has been implemented by the
previous patches so we can allow those flags for kvmalloc as well. This will
allow some external users to simplify or completely remove their helpers.

The GFP_NOWAIT semantic hasn't been supported so far, but it also hasn't been
explicitly documented, so let's add a note about that.

ceph_kvmalloc is the first helper to be dropped; its callers are converted to
plain kvmalloc.

Signed-off-by: Michal Hocko
---
 include/linux/ceph/libceph.h |  1 -
 mm/util.c                    | 15 ++++-----------
 net/ceph/buffer.c            |  4 ++--
 net/ceph/ceph_common.c       | 27 ---------------------------
 net/ceph/crypto.c            |  2 +-
 net/ceph/messenger.c         |  2 +-
 net/ceph/messenger_v2.c      |  2 +-
 net/ceph/osdmap.c            | 12 ++++++------
 8 files changed, 15 insertions(+), 50 deletions(-)

diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
index 409d8c29bc4f..309acbcb5a8a 100644
--- a/include/linux/ceph/libceph.h
+++ b/include/linux/ceph/libceph.h
@@ -295,7 +295,6 @@ extern bool libceph_compatible(void *data);
 extern const char *ceph_msg_type_name(int type);
 extern int ceph_check_fsid(struct ceph_client *client, struct ceph_fsid *fsid);
-extern void *ceph_kvmalloc(size_t size, gfp_t flags);
 
 struct fs_parameter;
 struct fc_log;
diff --git a/mm/util.c b/mm/util.c
index bacabe446906..fdec6b4b1267 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -549,13 +549,10 @@ EXPORT_SYMBOL(vm_mmap);
  * Uses kmalloc to get the memory but if the allocation fails then falls back
  * to the vmalloc allocator. Use kvfree for freeing the memory.
  *
- * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported.
+ * Reclaim modifiers - __GFP_NORETRY and GFP_NOWAIT are not supported.
  * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
  * preferable to the vmalloc fallback, due to visible performance drawbacks.
  *
- * Please note that any use of gfp flags outside of GFP_KERNEL is careful to not
- * fall back to vmalloc.
- *
  * Return: pointer to the allocated memory of %NULL in case of failure
  */
 void *kvmalloc_node(size_t size, gfp_t flags, int node)
@@ -563,13 +560,6 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 	gfp_t kmalloc_flags = flags;
 	void *ret;
 
-	/*
-	 * vmalloc uses GFP_KERNEL for some internal allocations (e.g page tables)
-	 * so the given set of flags has to be compatible.
-	 */
-	if ((flags & GFP_KERNEL) != GFP_KERNEL)
-		return kmalloc_node(size, flags, node);
-
 	/*
 	 * We want to attempt a large physically contiguous block first because
 	 * it is less likely to fragment multiple larger blocks and therefore
@@ -582,6 +572,9 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 
 		if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL))
 			kmalloc_flags |= __GFP_NORETRY;
+
+		/* nofail semantic is implemented by the vmalloc fallback */
+		kmalloc_flags &= ~__GFP_NOFAIL;
 	}
 
 	ret = kmalloc_node(size, kmalloc_flags, node);
diff --git a/net/ceph/buffer.c b/net/ceph/buffer.c
index 5622763ad402..7e51f128045d 100644
--- a/net/ceph/buffer.c
+++ b/net/ceph/buffer.c
@@ -7,7 +7,7 @@
 #include
 #include
 
-#include /* for ceph_kvmalloc */
+#include /* for kvmalloc */
 
 struct ceph_buffer *ceph_buffer_new(size_t len, gfp_t gfp)
 {
@@ -17,7 +17,7 @@ struct ceph_buffer *ceph_buffer_new(size_t len, gfp_t gfp)
 	if (!b)
 		return NULL;
 
-	b->vec.iov_base = ceph_kvmalloc(len, gfp);
+	b->vec.iov_base = kvmalloc(len, gfp);
 	if (!b->vec.iov_base) {
 		kfree(b);
 		return NULL;
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 97d6ea763e32..9441b4a4912b 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -190,33 +190,6 @@ int ceph_compare_options(struct ceph_options *new_opt,
 }
 EXPORT_SYMBOL(ceph_compare_options);
 
-/*
- * kvmalloc() doesn't fall back to the vmalloc allocator unless flags are
- * compatible with (a superset of) GFP_KERNEL. This is because while the
- * actual pages are allocated with the specified flags, the page table pages
- * are always allocated with GFP_KERNEL.
- *
- * ceph_kvmalloc() may be called with GFP_KERNEL, GFP_NOFS or GFP_NOIO.
- */
-void *ceph_kvmalloc(size_t size, gfp_t flags)
-{
-	void *p;
-
-	if ((flags & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS)) {
-		p = kvmalloc(size, flags);
-	} else if ((flags & (__GFP_IO | __GFP_FS)) == __GFP_IO) {
-		unsigned int nofs_flag = memalloc_nofs_save();
-		p = kvmalloc(size, GFP_KERNEL);
-		memalloc_nofs_restore(nofs_flag);
-	} else {
-		unsigned int noio_flag = memalloc_noio_save();
-		p = kvmalloc(size, GFP_KERNEL);
-		memalloc_noio_restore(noio_flag);
-	}
-
-	return p;
-}
-
 static int parse_fsid(const char *str, struct ceph_fsid *fsid)
 {
 	int i = 0;
diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
index 92d89b331645..051d22c0e4ad 100644
--- a/net/ceph/crypto.c
+++ b/net/ceph/crypto.c
@@ -147,7 +147,7 @@ void ceph_crypto_key_destroy(struct ceph_crypto_key *key)
 static const u8 *aes_iv = (u8 *)CEPH_AES_IV;
 
 /*
- * Should be used for buffers allocated with ceph_kvmalloc().
+ * Should be used for buffers allocated with kvmalloc().
  * Currently these are encrypt out-buffer (ceph_buffer) and decrypt
  * in-buffer (msg front).
 *
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 57d043b382ed..7b891be799d2 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1920,7 +1920,7 @@ struct ceph_msg *ceph_msg_new2(int type, int front_len, int max_data_items,
 
 	/* front */
 	if (front_len) {
-		m->front.iov_base = ceph_kvmalloc(front_len, flags);
+		m->front.iov_base = kvmalloc(front_len, flags);
 		if (m->front.iov_base == NULL) {
 			dout("ceph_msg_new can't allocate %d bytes\n",
 			     front_len);
diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index cc40ce4e02fb..c4099b641b38 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -308,7 +308,7 @@ static void *alloc_conn_buf(struct ceph_connection *con, int len)
 	if (WARN_ON(con->v2.conn_buf_cnt >= ARRAY_SIZE(con->v2.conn_bufs)))
 		return NULL;
 
-	buf = ceph_kvmalloc(len, GFP_NOIO);
+	buf = kvmalloc(len, GFP_NOIO);
 	if (!buf)
 		return NULL;
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 75b738083523..2823bb3cff55 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -980,7 +980,7 @@ static struct crush_work *alloc_workspace(const struct crush_map *c)
 	work_size = crush_work_size(c, CEPH_PG_MAX_SIZE);
 	dout("%s work_size %zu bytes\n", __func__, work_size);
 
-	work = ceph_kvmalloc(work_size, GFP_NOIO);
+	work = kvmalloc(work_size, GFP_NOIO);
 	if (!work)
 		return NULL;
 
@@ -1190,9 +1190,9 @@ static int osdmap_set_max_osd(struct ceph_osdmap *map, u32 max)
 	if (max == map->max_osd)
 		return 0;
 
-	state = ceph_kvmalloc(array_size(max, sizeof(*state)), GFP_NOFS);
-	weight = ceph_kvmalloc(array_size(max, sizeof(*weight)), GFP_NOFS);
-	addr = ceph_kvmalloc(array_size(max, sizeof(*addr)), GFP_NOFS);
+	state = kvmalloc(array_size(max, sizeof(*state)), GFP_NOFS);
+	weight = kvmalloc(array_size(max, sizeof(*weight)), GFP_NOFS);
+	addr = kvmalloc(array_size(max, sizeof(*addr)), GFP_NOFS);
 	if (!state || !weight || !addr) {
 		kvfree(state);
 		kvfree(weight);
@@ -1222,7 +1222,7 @@ static int osdmap_set_max_osd(struct ceph_osdmap *map, u32 max)
 	if (map->osd_primary_affinity) {
 		u32 *affinity;
 
-		affinity = ceph_kvmalloc(array_size(max, sizeof(*affinity)),
+		affinity = kvmalloc(array_size(max, sizeof(*affinity)),
 					 GFP_NOFS);
 		if (!affinity)
 			return -ENOMEM;
@@ -1503,7 +1503,7 @@ static int set_primary_affinity(struct ceph_osdmap *map, int osd, u32 aff)
 	if (!map->osd_primary_affinity) {
 		int i;
 
-		map->osd_primary_affinity = ceph_kvmalloc(
+		map->osd_primary_affinity = kvmalloc(
 		    array_size(map->max_osd,
 			       sizeof(*map->osd_primary_affinity)),
 		    GFP_NOFS);
 		if (!map->osd_primary_affinity)