From patchwork Mon Nov 22 15:32:31 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12632143
From: Michal Hocko
To: Andrew Morton
Cc: Dave Chinner, Neil Brown, Christoph Hellwig, Uladzislau Rezki,
    LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH v2 2/4] mm/vmalloc: add support for __GFP_NOFAIL
Date: Mon, 22 Nov 2021 16:32:31 +0100
Message-Id: <20211122153233.9924-3-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211122153233.9924-1-mhocko@kernel.org>
References: <20211122153233.9924-1-mhocko@kernel.org>

From: Michal Hocko

Dave Chinner has mentioned that some of the xfs code would benefit from
kvmalloc support for __GFP_NOFAIL, because it has allocations that
cannot fail and that do not fit into a single page.

Most of the vmalloc implementation already complies with the given gfp
flags, so there is nothing to be done there. The area and page table
allocations are the exception. Implement a retry loop for those and add
a short sleep before retrying. One jiffy is a completely arbitrary
timeout. Ideally the retry would wait for an explicit event - e.g. a
change to the vmalloc space if the failure was caused by space
fragmentation or depletion. But there are multiple different reasons to
retry, and that could become much more complex. Keep the retry simple
for now and just sleep to avoid hogging CPUs.

The page allocator bulk interface does not support __GFP_NOFAIL, so the
flag is masked out for bulk requests; the single page fallback path
still sees the full gfp mask and provides the no-fail guarantee.

Signed-off-by: Michal Hocko
Acked-by: Vlastimil Babka
---
 mm/vmalloc.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)
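A minimal usage sketch, not part of the diff (the size variable and the
GFP_KERNEL base flags are illustrative): with this patch applied, a
caller with a must-not-fail allocation larger than a page could rely on
the return value directly:

	#include <linux/vmalloc.h>

	/*
	 * With __GFP_NOFAIL honoured, __vmalloc() retries internally
	 * instead of returning NULL, so no error path is needed.
	 * The call may sleep while retrying.
	 */
	void *buf = __vmalloc(size, GFP_KERNEL | __GFP_NOFAIL);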
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 17ca7001de1f..b6aed4f94a85 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2844,6 +2844,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * more permissive.
 	 */
 	if (!order) {
+		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
+
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
 
@@ -2861,12 +2863,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 * but mempolcy want to alloc memory by interleaving.
 			 */
 			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy(gfp,
+				nr = alloc_pages_bulk_array_mempolicy(bulk_gfp,
 							nr_pages_request,
 							pages + nr_allocated);
 
 			else
-				nr = alloc_pages_bulk_array_node(gfp, nid,
+				nr = alloc_pages_bulk_array_node(bulk_gfp, nid,
 							nr_pages_request,
 							pages + nr_allocated);
 
@@ -2921,6 +2923,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 {
 	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
 	const gfp_t orig_gfp_mask = gfp_mask;
+	bool nofail = gfp_mask & __GFP_NOFAIL;
 	unsigned long addr = (unsigned long)area->addr;
 	unsigned long size = get_vm_area_size(area);
 	unsigned long array_size;
@@ -2978,8 +2981,12 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
 		flags = memalloc_noio_save();
 
-	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+	do {
+		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
 			page_shift);
+		if (nofail && (ret < 0))
+			schedule_timeout_uninterruptible(1);
+	} while (nofail && (ret < 0));
 
 	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
 		memalloc_nofs_restore(flags);
@@ -3074,9 +3081,14 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 				  VM_UNINITIALIZED | vm_flags, start, end, node,
 				  gfp_mask, caller);
 	if (!area) {
+		bool nofail = gfp_mask & __GFP_NOFAIL;
 		warn_alloc(gfp_mask, NULL,
-			"vmalloc error: size %lu, vm_struct allocation failed",
-			real_size);
+			"vmalloc error: size %lu, vm_struct allocation failed%s",
+			real_size, (nofail) ? ". Retrying." : "");
+		if (nofail) {
+			schedule_timeout_uninterruptible(1);
+			goto again;
+		}
 		goto fail;
 	}