From patchwork Wed Feb 19 02:29:18 2020
X-Patchwork-Submitter: David Rientjes <rientjes@google.com>
X-Patchwork-Id: 11390275
Date: Tue, 18 Feb 2020 18:29:18 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
    Mike Rapoport <rppt@linux.ibm.com>, Jeremy Cline <jcline@redhat.com>,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch 1/2] mm, shmem: add thp fault alloc and fallback stats
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0

The thp_fault_alloc and thp_fault_fallback vmstats are incremented when a
hugepage is successfully or unsuccessfully allocated, respectively, during a
page fault for anonymous memory.  Extend this to shmem as well.

Note that care is taken to increment thp_fault_alloc only when the fault
succeeds; this is the same behavior as anonymous thp.
Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/shmem.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1502,9 +1502,8 @@ static struct page *shmem_alloc_page(gfp_t gfp,
 	return page;
 }
 
-static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
-		struct inode *inode,
-		pgoff_t index, bool huge)
+static struct page *shmem_alloc_and_acct_page(gfp_t gfp, struct inode *inode,
+		pgoff_t index, bool fault, bool huge)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct page *page;
@@ -1518,9 +1517,11 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	if (!shmem_inode_acct_block(inode, nr))
 		goto failed;
 
-	if (huge)
+	if (huge) {
 		page = shmem_alloc_hugepage(gfp, info, index);
-	else
+		if (!page && fault)
+			count_vm_event(THP_FAULT_FALLBACK);
+	} else
 		page = shmem_alloc_page(gfp, info, index);
 	if (page) {
 		__SetPageLocked(page);
@@ -1832,11 +1833,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 alloc_huge:
-	page = shmem_alloc_and_acct_page(gfp, inode, index, true);
+	page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, true);
 	if (IS_ERR(page)) {
 alloc_nohuge:
-		page = shmem_alloc_and_acct_page(gfp, inode,
-						 index, false);
+		page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, false);
 	}
 	if (IS_ERR(page)) {
 		int retry = 5;
@@ -1871,8 +1871,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
 					    PageTransHuge(page));
-	if (error)
+	if (error) {
+		if (vmf && PageTransHuge(page))
+			count_vm_event(THP_FAULT_FALLBACK);
 		goto unacct;
+	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
 					NULL, gfp & GFP_RECLAIM_MASK);
 	if (error) {
@@ -1883,6 +1886,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	mem_cgroup_commit_charge(page, memcg, false,
 				 PageTransHuge(page));
 	lru_cache_add_anon(page);
+	if (vmf && PageTransHuge(page))
+		count_vm_event(THP_FAULT_ALLOC);
 	spin_lock_irq(&info->lock);
 	info->alloced += compound_nr(page);
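
Not part of the patch, but for context: both counters are exported through
/proc/vmstat as "thp_fault_alloc" and "thp_fault_fallback", so with this
change a shmem-heavy workload's hugepage fallback rate becomes observable
from userspace.  A minimal Python sketch of how one might compute that rate
(the helper names and the inline sample text are illustrative, not from the
kernel):

```python
# Illustrative only: parse "name value" lines in /proc/vmstat format and
# compute the THP fault fallback rate from the two counters this patch
# extends to shmem.
def parse_vmstat(text):
    """Map counter names to integer values from 'name value' lines."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[name] = int(value.strip())
    return stats

def thp_fallback_ratio(stats):
    """Fraction of THP fault attempts that fell back to small pages."""
    alloc = stats.get("thp_fault_alloc", 0)
    fallback = stats.get("thp_fault_fallback", 0)
    attempts = alloc + fallback
    return fallback / attempts if attempts else 0.0

# Sample text standing in for open("/proc/vmstat").read() on a live system.
sample = "thp_fault_alloc 120\nthp_fault_fallback 40\nnr_free_pages 999\n"
stats = parse_vmstat(sample)
print(thp_fallback_ratio(stats))  # 40 / (120 + 40) = 0.25
```

Sampling the counters before and after a run and diffing them gives the rate
for that workload alone, since the vmstat values are cumulative since boot.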