From patchwork Wed Feb 19 02:29:18 2020
Date: Tue, 18 Feb 2020 18:29:18 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
    Mike Rapoport <rppt@linux.ibm.com>,
    Jeremy Cline <jcline@redhat.com>,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch 1/2] mm, shmem: add thp fault alloc and fallback stats

The thp_fault_alloc and thp_fault_fallback vmstats are incremented when a
hugepage is successfully or unsuccessfully allocated, respectively, during a
page fault for anonymous memory.  Extend this to shmem as well.

Note that care is taken to increment thp_fault_alloc only when the fault
succeeds; this is the same behavior as anonymous thp.
Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/shmem.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1502,9 +1502,8 @@ static struct page *shmem_alloc_page(gfp_t gfp,
 	return page;
 }
 
-static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
-		struct inode *inode,
-		pgoff_t index, bool huge)
+static struct page *shmem_alloc_and_acct_page(gfp_t gfp, struct inode *inode,
+		pgoff_t index, bool fault, bool huge)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct page *page;
@@ -1518,9 +1517,11 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	if (!shmem_inode_acct_block(inode, nr))
 		goto failed;
 
-	if (huge)
+	if (huge) {
 		page = shmem_alloc_hugepage(gfp, info, index);
-	else
+		if (!page && fault)
+			count_vm_event(THP_FAULT_FALLBACK);
+	} else
 		page = shmem_alloc_page(gfp, info, index);
 	if (page) {
 		__SetPageLocked(page);
@@ -1832,11 +1833,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 alloc_huge:
-	page = shmem_alloc_and_acct_page(gfp, inode, index, true);
+	page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, true);
 	if (IS_ERR(page)) {
 alloc_nohuge:
-		page = shmem_alloc_and_acct_page(gfp, inode,
-						 index, false);
+		page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, false);
 	}
 	if (IS_ERR(page)) {
 		int retry = 5;
@@ -1871,8 +1871,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
 					    PageTransHuge(page));
-	if (error)
+	if (error) {
+		if (vmf && PageTransHuge(page))
+			count_vm_event(THP_FAULT_FALLBACK);
 		goto unacct;
+	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
 					NULL, gfp & GFP_RECLAIM_MASK);
 	if (error) {
@@ -1883,6 +1886,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	mem_cgroup_commit_charge(page, memcg, false,
 				 PageTransHuge(page));
 	lru_cache_add_anon(page);
+	if (vmf && PageTransHuge(page))
+		count_vm_event(THP_FAULT_ALLOC);
 
 	spin_lock_irq(&info->lock);
 	info->alloced += compound_nr(page);
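
For reference, a minimal user-space sketch (not part of the patch) for
observing the counters this change extends to shmem.  It assumes only that
the kernel exposes thp_fault_alloc and thp_fault_fallback in /proc/vmstat
(CONFIG_TRANSPARENT_HUGEPAGE=y); sample it before and after faulting a
huge=always tmpfs mapping to see the new accounting:

/* Sketch, not part of the patch: print the THP fault counters. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[256];

	if (!f) {
		perror("fopen /proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Matches thp_fault_alloc and thp_fault_fallback*. */
		if (!strncmp(line, "thp_fault_", 10))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
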
From patchwork Wed Feb 19 02:29:21 2020
Date: Tue, 18 Feb 2020 18:29:21 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Shutemov" , Mike Rapoport , Jeremy Cline , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [patch 2/2] mm, thp: track fallbacks due to failed memcg charges separately In-Reply-To: Message-ID: References: User-Agent: Alpine 2.21 (DEB 202 2017-01-01) MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The thp_fault_fallback stat in /proc/vmstat is incremented if either the hugepage allocation fails through the page allocator or the hugepage charge fails through mem cgroup. This patch leaves this field untouched but adds a new field, thp_fault_fallback_charge, which is incremented only when the mem cgroup charge fails. This distinguishes between faults that want to be backed by hugepages but fail due to fragmentation (or low memory conditions) and those that fail due to mem cgroup limits. That can be used to determine the impact of fragmentation on the system by excluding faults that failed due to memcg usage. Signed-off-by: David Rientjes Reviewed-by: Mike Rapoport # Documentation --- v2: - supported for shmem faults as well per Kirill - fixed worked in documentation and commit description per Mike Documentation/admin-guide/mm/transhuge.rst | 5 +++++ include/linux/vm_event_item.h | 1 + mm/huge_memory.c | 2 ++ mm/shmem.c | 4 +++- mm/vmstat.c | 1 + 5 files changed, 12 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst --- a/Documentation/admin-guide/mm/transhuge.rst +++ b/Documentation/admin-guide/mm/transhuge.rst @@ -310,6 +310,11 @@ thp_fault_fallback is incremented if a page fault fails to allocate a huge page and instead falls back to using small pages. +thp_fault_fallback_charge + is incremented if a page fault fails to charge a huge page and + instead falls back to using small pages even though the + allocation was successful. 
+
 thp_collapse_alloc_failed
 	is incremented if khugepaged found a range
 	of pages that should be collapsed into one huge page but failed
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -73,6 +73,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		THP_FAULT_ALLOC,
 		THP_FAULT_FALLBACK,
+		THP_FAULT_FALLBACK_CHARGE,
 		THP_COLLAPSE_ALLOC,
 		THP_COLLAPSE_ALLOC_FAILED,
 		THP_FILE_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		return VM_FAULT_FALLBACK;
 	}
 
@@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		put_page(page);
 		ret |= VM_FAULT_FALLBACK;
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		goto out;
 	}
 
diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1872,8 +1872,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
 					    PageTransHuge(page));
 	if (error) {
-		if (vmf && PageTransHuge(page))
+		if (vmf && PageTransHuge(page)) {
 			count_vm_event(THP_FAULT_FALLBACK);
+			count_vm_event(THP_FAULT_FALLBACK_CHARGE);
+		}
 		goto unacct;
 	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
diff --git a/mm/vmstat.c b/mm/vmstat.c
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1254,6 +1254,7 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	"thp_fault_alloc",
 	"thp_fault_fallback",
+	"thp_fault_fallback_charge",
 	"thp_collapse_alloc",
 	"thp_collapse_alloc_failed",
 	"thp_file_alloc",
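
As a usage note (not part of this series), the split described in the
changelog, fallbacks caused by memcg limits versus fallbacks caused by the
page allocator (fragmentation or low memory), can be derived by subtracting
the new counter from thp_fault_fallback, since both are incremented on a
charge failure.  A minimal sketch, assuming a kernel with this series
applied so that thp_fault_fallback_charge appears in /proc/vmstat:

/* Sketch, not part of the patch: attribute THP fault fallbacks. */
#include <stdio.h>
#include <string.h>

static unsigned long long vmstat_read(const char *name)
{
	unsigned long long val, ret = 0;
	char key[64];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, name)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	unsigned long long fallback = vmstat_read("thp_fault_fallback");
	unsigned long long charge = vmstat_read("thp_fault_fallback_charge");

	printf("thp fault fallbacks, total:        %llu\n", fallback);
	printf("thp fault fallbacks, memcg charge: %llu\n", charge);
	printf("thp fault fallbacks, allocator:    %llu\n",
	       fallback >= charge ? fallback - charge : 0);
	return 0;
}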