From patchwork Tue Dec 11 14:36:40 2018
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 10723915
From: Michal Hocko
To: Andrew Morton
Cc: linux-api@vger.kernel.org, LKML, Michal Hocko, Vlastimil Babka
Subject: [PATCH 2/3] mm, thp, proc: report THP eligibility for each vma
Date: Tue, 11 Dec 2018 15:36:40 +0100
Message-Id: <20181211143641.3503-3-mhocko@kernel.org>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181211143641.3503-1-mhocko@kernel.org>
References: <20181211143641.3503-1-mhocko@kernel.org>
MIME-Version: 1.0

From: Michal Hocko

Userspace falls short when trying to find out whether a specific memory
range is eligible for THP. There are usecases that would like to know
that, e.g.
http://lkml.kernel.org/r/alpine.DEB.2.21.1809251248450.50347@chino.kir.corp.google.com
: This is used to identify heap mappings that should be able to fault thp
: but do not, and they normally point to a low-on-memory or fragmentation
: issue.

The only way to deduce this now is to query for the hg resp. nh VmFlags
and confront that state with the global setting. Except that there is
also PR_SET_THP_DISABLE that might change the picture. So the final
logic is not trivial.
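To illustrate, a userspace check along those lines has to combine at
least three sources (a rough sketch only, not part of this patch;
thp_may_fault is a made-up helper and the caller is assumed to have
already read the vma's VmFlags line from smaps as well as the global
sysfs mode):

	#include <stdbool.h>
	#include <string.h>
	#include <sys/prctl.h>

	/*
	 * vmflags: the "VmFlags:" line of the vma from /proc/<pid>/smaps
	 * global:  contents of /sys/kernel/mm/transparent_hugepage/enabled
	 */
	static bool thp_may_fault(const char *vmflags, const char *global)
	{
		/*
		 * Process wide opt-out set via prctl(PR_SET_THP_DISABLE);
		 * note this can only be queried for the current task.
		 */
		if (prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0) > 0)
			return false;
		/* per-vma MADV_NOHUGEPAGE resp. MADV_HUGEPAGE */
		if (strstr(vmflags, " nh"))
			return false;
		if (strstr(vmflags, " hg"))
			return true;
		/* otherwise the global mode decides (anonymous memory only) */
		return strstr(global, "[always]") != NULL;
	}

And even this only covers anonymous mappings.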
Moreover the eligibility of the vma depends on the type of VMA as well.
In the past we have supported only anonymous memory VMAs but things have
changed and shmem based vmas are supported as well these days and the
query logic gets even more complicated because the eligibility depends
on the mount option and another global configuration knob.

Simplify the current state and report the THP eligibility in
/proc/<pid>/smaps for each existing vma. Reuse
transparent_hugepage_enabled for this purpose. The original
implementation of this function assumes that the caller knows that the
vma itself is supported for THP so make the core checks into
__transparent_hugepage_enabled and use it for existing callers.
__show_smap just uses the new transparent_hugepage_enabled which also
checks the vma support status (please note that this one has to be out
of line due to include dependency issues).

Acked-by: Vlastimil Babka
Signed-off-by: Michal Hocko
---
 Documentation/filesystems/proc.txt |  3 +++
 fs/proc/task_mmu.c                 |  2 ++
 include/linux/huge_mm.h            | 13 ++++++++++++-
 mm/huge_memory.c                   | 12 +++++++++++-
 mm/memory.c                        |  4 ++--
 5 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 2a4e63f5122c..cd465304bec4 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -425,6 +425,7 @@ SwapPss:               0 kB
 KernelPageSize:        4 kB
 MMUPageSize:           4 kB
 Locked:                0 kB
+THPeligible:           0
 VmFlags: rd ex mr mw me dw
 
 the first of these lines shows the same information as is displayed for the
@@ -462,6 +463,8 @@ replaced by copy-on-write) part of the underlying shmem object out on swap.
 "SwapPss" shows proportional swap share of this mapping. Unlike "Swap", this
 does not take into account swapped out page of underlying shmem objects.
 "Locked" indicates whether the mapping is locked in memory or not.
+"THPeligible" indicates whether the mapping is eligible for THP pages - 1 if
+true, 0 otherwise.
 
 "VmFlags" field deserves a separate description. This member represents the kernel
 flags associated with the particular virtual memory area in two letter encoded
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 47c3764c469b..c9f160eb9fbc 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -790,6 +790,8 @@ static int show_smap(struct seq_file *m, void *v)
 
 	__show_smap(m, &mss);
 
+	seq_printf(m, "THPeligible: %d\n", transparent_hugepage_enabled(vma));
+
 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
 	show_smap_vma_flags(m, vma);
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4663ee96cf59..381e872bfde0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -93,7 +93,11 @@ extern bool is_vma_temporary_stack(struct vm_area_struct *vma);
 
 extern unsigned long transparent_hugepage_flags;
 
-static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+/*
+ * to be used on vmas which are known to support THP.
+ * Use transparent_hugepage_enabled otherwise
+ */
+static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & VM_NOHUGEPAGE)
 		return false;
@@ -117,6 +121,8 @@ static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 	return false;
 }
 
+bool transparent_hugepage_enabled(struct vm_area_struct *vma);
+
 #define transparent_hugepage_use_zero_page()				\
 	(transparent_hugepage_flags &					\
 	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -62,6 +62,16 @@ static struct shrinker deferred_split_shrinker;
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 
+bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+{
+	if (vma_is_anonymous(vma))
+		return __transparent_hugepage_enabled(vma);
+	if (shmem_mapping(vma->vm_file->f_mapping) && shmem_huge_enabled(vma))
+		return __transparent_hugepage_enabled(vma);
+
+	return false;
+}
+
 static struct page *get_huge_zero_page(void)
 {
 	struct page *zero_page;
@@ -1303,7 +1313,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	get_page(page);
 	spin_unlock(vmf->ptl);
 alloc:
-	if (transparent_hugepage_enabled(vma) &&
+	if (__transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow()) {
 		huge_gfp = alloc_hugepage_direct_gfpmask(vma, haddr);
 		new_page = alloc_pages_vma(huge_gfp, HPAGE_PMD_ORDER, vma,
diff --git a/mm/memory.c b/mm/memory.c
index 4ad2d293ddc2..3c2716ec7fbd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3830,7 +3830,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	vmf.pud = pud_alloc(mm, p4d, address);
 	if (!vmf.pud)
 		return VM_FAULT_OOM;
-	if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)) {
+	if (pud_none(*vmf.pud) && __transparent_hugepage_enabled(vma)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -3856,7 +3856,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
 	if (!vmf.pmd)
 		return VM_FAULT_OOM;
-	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
+	if (pmd_none(*vmf.pmd) && __transparent_hugepage_enabled(vma)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
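
With the patch applied, the per-vma answer can be read directly from
smaps. A minimal consumer could look roughly like the sketch below
(illustrative only, not part of the patch; dump_thp_eligibility is a
made-up helper and the smaps parsing is intentionally naive):

	#include <ctype.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/types.h>

	/* print each mapping of @pid together with its THPeligible value */
	static void dump_thp_eligibility(pid_t pid)
	{
		char path[64], line[512], range[128] = "";
		FILE *f;

		snprintf(path, sizeof(path), "/proc/%d/smaps", (int)pid);
		f = fopen(path, "r");
		if (!f)
			return;
		while (fgets(line, sizeof(line), f)) {
			/* vma header lines start with the hex start address */
			if (isxdigit((unsigned char)line[0]) && strchr(line, '-'))
				snprintf(range, sizeof(range), "%s", line);
			else if (!strncmp(line, "THPeligible:", 12))
				printf("%.*s %s", (int)strcspn(range, " "),
				       range, line);
		}
		fclose(f);
	}

Unlike the VmFlags based guess sketched earlier, this reflects the
kernel's actual decision, including the shmem mount options and the
PR_SET_THP_DISABLE state of the target process.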