From patchwork Wed Feb 19 02:29:21 2020
X-Patchwork-Submitter: David Rientjes <rientjes@google.com>
X-Patchwork-Id: 11390277
Date: Tue, 18 Feb 2020 18:29:21 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
    Mike Rapoport <rppt@linux.ibm.com>,
    Jeremy Cline <jcline@redhat.com>,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch 2/2] mm, thp: track fallbacks due to failed memcg charges separately

The thp_fault_fallback stat in /proc/vmstat is incremented if either the
hugepage allocation fails through the page allocator or the hugepage charge
fails through mem cgroup.

This patch leaves this field untouched but adds a new field,
thp_fault_fallback_charge, which is incremented only when the mem cgroup
charge fails.

This distinguishes between faults that want to be backed by hugepages but
fail due to fragmentation (or low memory conditions) and those that fail
due to mem cgroup limits.  That can be used to determine the impact of
fragmentation on the system by excluding faults that failed due to memcg
usage.

Signed-off-by: David Rientjes <rientjes@google.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>	# Documentation
---
 v2: - supported for shmem faults as well per Kirill
     - fixed wording in documentation and commit description per Mike
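 As a usage illustration (not part of the patch), the fragmentation-only
 fallback count can be derived in userspace by subtracting the new counter
 from thp_fault_fallback, since the latter still includes charge failures.
 A minimal sketch, assuming only the semantics described above:

/*
 * Illustrative only, not part of the patch: read both counters from
 * /proc/vmstat and report fallbacks that were caused by fragmentation
 * or low memory, i.e. everything except memcg charge failures.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, fallback = 0, charge = 0;

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "thp_fault_fallback"))
			fallback = val;
		else if (!strcmp(name, "thp_fault_fallback_charge"))
			charge = val;
	}
	fclose(f);

	/* thp_fault_fallback includes charge failures, so exclude them. */
	printf("thp fallbacks excluding memcg charge failures: %llu\n",
	       fallback - charge);
	return 0;
}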
 Documentation/admin-guide/mm/transhuge.rst | 5 +++++
 include/linux/vm_event_item.h              | 1 +
 mm/huge_memory.c                           | 2 ++
 mm/shmem.c                                 | 4 +++-
 mm/vmstat.c                                | 1 +
 5 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -310,6 +310,11 @@ thp_fault_fallback
 	is incremented if a page fault fails to allocate
 	a huge page and instead falls back to using small pages.
 
+thp_fault_fallback_charge
+	is incremented if a page fault fails to charge a huge page and
+	instead falls back to using small pages even though the
+	allocation was successful.
+
 thp_collapse_alloc_failed
 	is incremented if khugepaged found a range
 	of pages that should be collapsed into one huge page but failed
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -73,6 +73,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		THP_FAULT_ALLOC,
 		THP_FAULT_FALLBACK,
+		THP_FAULT_FALLBACK_CHARGE,
 		THP_COLLAPSE_ALLOC,
 		THP_COLLAPSE_ALLOC_FAILED,
 		THP_FILE_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		return VM_FAULT_FALLBACK;
 	}
 
@@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		put_page(page);
 		ret |= VM_FAULT_FALLBACK;
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		goto out;
 	}
diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1872,8 +1872,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
 					    PageTransHuge(page));
 	if (error) {
-		if (vmf && PageTransHuge(page))
+		if (vmf && PageTransHuge(page)) {
 			count_vm_event(THP_FAULT_FALLBACK);
+			count_vm_event(THP_FAULT_FALLBACK_CHARGE);
+		}
 		goto unacct;
 	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
diff --git a/mm/vmstat.c b/mm/vmstat.c
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1254,6 +1254,7 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	"thp_fault_alloc",
 	"thp_fault_fallback",
+	"thp_fault_fallback_charge",
 	"thp_collapse_alloc",
 	"thp_collapse_alloc_failed",
 	"thp_file_alloc",
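
For testing, a hypothetical way to exercise the charge-failure path is to
fault THP-backed anonymous memory inside a memcg whose memory limit is
smaller than the mapping.  The sketch below assumes such a setup; the
madvise(MADV_HUGEPAGE) hint and the 512MB size are illustrative, and the
process may be OOM-killed once the limit is exhausted, but
thp_fault_fallback_charge should advance first.

/*
 * Hypothetical repro sketch: fault in a large anonymous mapping with
 * the THP hint set.  Run inside a memcg whose memory limit is below
 * LEN so that huge page charges fail; each failed charge should bump
 * thp_fault_fallback_charge before the fault falls back to small pages.
 */
#include <string.h>
#include <sys/mman.h>

#define LEN	(512UL << 20)	/* 512MB, illustrative size */

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	madvise(p, LEN, MADV_HUGEPAGE);
	memset(p, 1, LEN);	/* touch every page */
	return 0;
}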