From patchwork Wed Oct 21 08:09:39 2020
X-Patchwork-Submitter: Xu Yu
X-Patchwork-Id: 11848511
From: Xu Yu <xuyu@linux.alibaba.com>
To: linux-mm@kvack.org
Cc: hughd@google.com, akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/shmem: fix up gfpmask for shmem hugepage allocation
Date: Wed, 21 Oct 2020 16:09:39 +0800
Message-Id: <11e1ead211eb7d141efa0eb75a46ee2096ee63f8.1603267572.git.xuyu@linux.alibaba.com>

Currently, the gfpmask used in shmem_alloc_hugepage() is fixed, i.e.
gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN, where gfp comes from
the inode mapping, usually GFP_HIGHUSER_MOVABLE. This introduces direct
or kswapd reclaim when the fast path of shmem hugepage allocation fails,
which is sometimes unexpected.

This applies the effect of the anonymous hugepage defrag option to shmem
hugepages too. By doing so, we can control the defrag behavior of both
kinds of THP.

This also explicitly adds the SHMEM_HUGE_ALWAYS case in
shmem_getpage_gfp(), for better readability.

Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
---
 mm/shmem.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index 537c137698f8..a0f5d02e479b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1780,6 +1780,47 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	return error;
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline gfp_t shmem_hugepage_gfpmask_fixup(gfp_t gfp,
+						 enum sgp_type sgp_huge)
+{
+	const bool vma_madvised = sgp_huge == SGP_HUGE;
+
+	gfp |= __GFP_NOMEMALLOC;
+	gfp &= ~__GFP_RECLAIM;
+
+	/* Force do synchronous compaction */
+	if (shmem_huge == SHMEM_HUGE_FORCE)
+		return gfp | __GFP_DIRECT_RECLAIM;
+
+	/* Always do synchronous compaction */
+	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
+		return gfp | __GFP_DIRECT_RECLAIM | (vma_madvised ? 0 : __GFP_NORETRY);
+
+	/* Kick kcompactd and fail quickly */
+	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
+		return gfp | __GFP_KSWAPD_RECLAIM;
+
+	/* Synchronous compaction if madvised, otherwise kick kcompactd */
+	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
+		return gfp |
+			(vma_madvised ? __GFP_DIRECT_RECLAIM :
+					__GFP_KSWAPD_RECLAIM);
+
+	/* Only do synchronous compaction if madvised */
+	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
+		return gfp | (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);
+
+	return gfp;
+}
+#else
+static inline gfp_t shmem_hugepage_gfpmask_fixup(gfp_t gfp,
+						 enum sgp_type sgp_huge)
+{
+	return gfp;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 /*
  * shmem_getpage_gfp - find page in cache, or get from swap, or allocate
  *
@@ -1867,6 +1908,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	switch (sbinfo->huge) {
 	case SHMEM_HUGE_NEVER:
 		goto alloc_nohuge;
+	case SHMEM_HUGE_ALWAYS:
+		goto alloc_huge;
 	case SHMEM_HUGE_WITHIN_SIZE: {
 		loff_t i_size;
 		pgoff_t off;
@@ -1887,6 +1930,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 alloc_huge:
+	gfp = shmem_hugepage_gfpmask_fixup(gfp, sgp_huge);
 	page = shmem_alloc_and_acct_page(gfp, inode, index, true);
 	if (IS_ERR(page)) {
 alloc_nohuge:
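
[Editorial note, not part of the patch] For readers unfamiliar with how the
madvised case above is reached from userspace: madvise(MADV_HUGEPAGE) on a
shmem/tmpfs mapping sets VM_HUGEPAGE on the VMA, which shmem_fault() passes
down as sgp_huge == SGP_HUGE, i.e. the vma_madvised case checked in
shmem_hugepage_gfpmask_fixup(). A minimal userspace sketch of that setup
follows; it is illustrative only, and the memfd name and mapping size are
arbitrary choices.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (4UL << 20)	/* multiple of the 2 MiB PMD size on x86-64 */

int main(void)
{
	/* memfd_create() gives an unlinked tmpfs (shmem) file. */
	int fd = memfd_create("thp-demo", 0);
	if (fd < 0 || ftruncate(fd, MAP_SIZE) < 0) {
		perror("memfd setup");
		return 1;
	}

	char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Sets VM_HUGEPAGE; shmem faults then take the madvised path. */
	if (madvise(p, MAP_SIZE, MADV_HUGEPAGE) < 0)
		perror("madvise(MADV_HUGEPAGE)");

	memset(p, 0x5a, MAP_SIZE);	/* fault the pages in */

	munmap(p, MAP_SIZE);
	close(fd);
	return 0;
}

Whether a huge page is actually attempted still depends on the shmem THP
policy (the tmpfs huge= mount option or
/sys/kernel/mm/transparent_hugepage/shmem_enabled); what the patch changes is
the reclaim/compaction behavior of that attempt, which now follows
/sys/kernel/mm/transparent_hugepage/defrag like anonymous THP.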