From patchwork Mon Nov 22 15:32:30 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12632141
From: Michal Hocko
To: Andrew Morton
Cc: Dave Chinner, Neil Brown, Christoph Hellwig, Uladzislau Rezki,
 linux-mm@kvack.org, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH v2 1/4] mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc
Date: Mon, 22 Nov 2021 16:32:30 +0100
Message-Id: <20211122153233.9924-2-mhocko@kernel.org>
In-Reply-To: <20211122153233.9924-1-mhocko@kernel.org>
References: <20211122153233.9924-1-mhocko@kernel.org>

From: Michal Hocko

vmalloc has historically not supported GFP_NO{FS,IO} requests because page table allocations do not support
an externally provided gfp mask and are performed as GFP_KERNEL allocations. For a few years now we have had the scope APIs (memalloc_no{fs,io}_{save,restore}) to enforce the NOFS and NOIO constraints implicitly on all allocators within the scope. The hope was that those scopes would be defined at a higher level, where the reclaim recursion boundary starts/stops (e.g. when a lock required during memory reclaim is taken). It seems that not all NOFS/NOIO users have adopted this approach; instead, they work around the limitation by wrapping a single [k]vmalloc allocation in a scope API. These workarounds serve neither the purpose of better documenting reclaim recursion nor of reducing explicit GFP_NO{FS,IO} usage, so let's just provide the semantic they are asking for without requiring workarounds.

Add support for GFP_NOFS and GFP_NOIO to vmalloc directly. All internal allocations already comply with the given gfp_mask. The only current exception is vmap_pages_range, which maps the kernel page tables. Infer the proper scope API based on the given gfp mask.
Signed-off-by: Michal Hocko
Reviewed-by: Uladzislau Rezki (Sony)
Acked-by: Vlastimil Babka
---
 mm/vmalloc.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..17ca7001de1f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2926,6 +2926,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	unsigned long array_size;
 	unsigned int nr_small_pages = size >> PAGE_SHIFT;
 	unsigned int page_order;
+	unsigned int flags;
+	int ret;
 
 	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
 	gfp_mask |= __GFP_NOWARN;
@@ -2967,8 +2969,24 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages,
-			page_shift) < 0) {
+	/*
+	 * page tables allocations ignore external gfp mask, enforce it
+	 * by the scope API
+	 */
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		flags = memalloc_nofs_save();
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		flags = memalloc_noio_save();
+
+	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift);
+
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		memalloc_nofs_restore(flags);
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		memalloc_noio_restore(flags);
+
+	if (ret < 0) {
 		warn_alloc(orig_gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to map pages",
 			area->nr_pages * PAGE_SIZE);