From patchwork Wed Oct 2 23:03:03 2019
X-Patchwork-Submitter: David Rientjes
X-Patchwork-Id: 11171887
Date: Wed, 2 Oct 2019 16:03:03 -0700 (PDT)
From: David Rientjes
To: Mike Kravetz, Michal Hocko
cc: Vlastimil Babka, Linus Torvalds, Andrea Arcangeli, Andrew Morton, Mel Gorman, "Kirill A. Shutemov", Linux Kernel Mailing List, Linux-MM
Subject: [rfc] mm, hugetlb: allow hugepage allocations to excessively reclaim

Hugetlb allocations use __GFP_RETRY_MAYFAIL to aggressively attempt to get
hugepages that the user needs.  Commit b39d0ee2632d ("mm, page_alloc: avoid
expensive reclaim when compaction may not succeed") intends to improve
allocator behavior for thp allocations to prevent excessive amounts of
reclaim, especially when constrained to a single node.

Since hugetlb allocations have explicitly preferred to loop and do reclaim
and compaction, exempt them from this new behavior, at least for the time
being.  It has not been shown that the hugetlb allocation success rate has
been impacted by commit b39d0ee2632d, but hugetlb allocations are admittedly
beyond the scope of what that patch is intended to address (thp
allocations).

Cc: Mike Kravetz
Signed-off-by: David Rientjes
---
Mike, you alluded that you may want to opt hugetlbfs out of this for the
time being in https://marc.info/?l=linux-kernel&m=156771690024533 -- I am
not sure whether you want to allow this excessive amount of reclaim for
hugetlb allocations or not, given the swap storms Andrea has shown are
possible (and nr_hugepages_mempolicy does exist), but hugetlbfs was not
part of the problem we are trying to address here, so there is no objection
to opting it out.

You might want to consider how expensive and disruptive hugetlb allocations
can become to the system if they do not yield additional hugepages, but
that can be done at any time later as a general improvement rather than as
part of a series aimed at thp.
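For context (and not part of the patch itself): hugetlb's allocations from
the buddy allocator opt in to the expensive retry behavior by or-ing
__GFP_RETRY_MAYFAIL into their gfp mask before calling into the page
allocator, which is why testing that flag in the slowpath is enough to
exempt them.  A simplified sketch of such a caller follows; it assumes the
general shape of mm/hugetlb.c at this time rather than quoting it verbatim:

/*
 * Illustrative sketch of a hugetlb-style allocation path (not verbatim
 * kernel code): the caller explicitly asks the allocator to keep retrying
 * reclaim/compaction by setting __GFP_RETRY_MAYFAIL, whereas a thp fault
 * allocates without it and is expected to fail fast.
 */
static struct page *hugetlb_style_alloc(struct hstate *h, gfp_t gfp_mask,
					int nid, nodemask_t *nmask)
{
	int order = huge_page_order(h);

	/* opt in to expensive retries; never warn on failure */
	gfp_mask |= __GFP_COMP | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

	return __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
}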
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4467,12 +4467,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (page)
 		goto got_pg;
 
-	if (order >= pageblock_order && (gfp_mask & __GFP_IO)) {
+	if (order >= pageblock_order && (gfp_mask & __GFP_IO) &&
+			!(gfp_mask & __GFP_RETRY_MAYFAIL)) {
 		/*
 		 * If allocating entire pageblock(s) and compaction
 		 * failed because all zones are below low watermarks
 		 * or is prohibited because it recently failed at this
-		 * order, fail immediately.
+		 * order, fail immediately unless the allocator has
+		 * requested compaction and reclaim retry.
 		 *
 		 * Reclaim is
 		 * - potentially very expensive because zones are far
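To make the resulting decision explicit, here is a small self-contained
userspace sketch of the patched check; the flag values and the function
name are stand-ins chosen for illustration, not the real definitions from
include/linux/gfp.h:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in flag values for illustration only; the real bit definitions
 * live in include/linux/gfp.h. */
#define GFP_IO            (1u << 0)
#define GFP_RETRY_MAYFAIL (1u << 1)

/* Mirrors the patched condition: costly allocations bail out early only
 * if they allow IO and have not asked for reclaim/compaction retries. */
static bool fails_immediately(unsigned int order, unsigned int pageblock_order,
			      unsigned int gfp_mask)
{
	return order >= pageblock_order &&
	       (gfp_mask & GFP_IO) &&
	       !(gfp_mask & GFP_RETRY_MAYFAIL);
}

int main(void)
{
	/* thp-style allocation: bails out when compaction is unlikely. */
	printf("thp bails out:     %d\n",
	       fails_immediately(9, 9, GFP_IO));
	/* hugetlb-style allocation: keeps reclaiming and compacting. */
	printf("hugetlb bails out: %d\n",
	       fails_immediately(9, 9, GFP_IO | GFP_RETRY_MAYFAIL));
	return 0;
}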