From patchwork Fri Aug 12 01:28:36 2022
From: "Zach O'Keefe" <zokeefe@google.com>
Date: Thu, 11 Aug 2022 18:28:36 -0700
Subject: [PATCH mm-unstable 2/9] mm/khugepaged: attempt to map
 file/shmem-backed pte-mapped THPs by pmds
Message-Id: <20220812012843.3948330-3-zokeefe@google.com>
In-Reply-To: <20220812012843.3948330-1-zokeefe@google.com>
References: <20220812012843.3948330-1-zokeefe@google.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-api@vger.kernel.org, Axel Rasmussen,
 James Houghton, Hugh Dickins, Yang Shi, Miaohe Lin, David Hildenbrand,
 David Rientjes, Matthew Wilcox, Pasha Tatashin, Peter Xu, Rongwei Wang,
 SeongJae Park, Song Liu, Vlastimil Babka, Chris Kennelly,
 "Kirill A. Shutemov", Minchan Kim, Patrick Xia, "Zach O'Keefe"
The main benefit of THPs is that they can be mapped at the pmd level,
increasing the likelihood of TLB hits and spending fewer cycles in page
table walks. pte-mapped hugepages - that is, hugepage-aligned compound
pages of order HPAGE_PMD_ORDER - although contiguous in physical
memory, don't have this advantage. In fact, one could argue they are
detrimental to system performance overall since they occupy a precious
hugepage-aligned/sized region of physical memory that could otherwise
be used more effectively. Additionally, pte-mapped hugepages can be the
cheapest memory for khugepaged to collapse, since no new hugepage
allocation or copying of memory contents is necessary - we only need to
update the mapping page tables.

In the anonymous collapse path, we are able to collapse pte-mapped
hugepages (albeit perhaps suboptimally), but the file/shmem path makes
no effort when compound pages (of any order) are encountered.

Identify pte-mapped hugepages in the file/shmem collapse path. In
khugepaged context, attempt to update the page tables mapping such a
hugepage.

Note that these collapses still count towards the
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed counter,
and if the pte-mapped hugepage was also mapped into multiple processes'
address spaces, the counter could be incremented once for each page
table update. Since we increment the counter when a pte-mapped hugepage
is successfully added to the list of to-collapse pte-mapped THPs, it's
possible that we never actually update the page table either. This is
different from how file/shmem pages_collapsed accounting works today,
where only a successful page cache update is counted (it's also
possible here that no page tables are actually changed).
Though it incurs some slop, this is preferred to either not accounting
for the event at all, or plumbing through data in struct mm_slot on
whether to account for the collapse or not.

Note that work still needs to be done to support arbitrary compound
pages, and that this should all be converted to using folios.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 include/trace/events/huge_memory.h |  1 +
 mm/khugepaged.c                    | 43 +++++++++++++++++++++++++-----
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 55392bf30a03..fbbb25494d60 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -17,6 +17,7 @@
 	EM( SCAN_EXCEED_SHARED_PTE,	"exceed_shared_pte")		\
 	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
 	EM( SCAN_PTE_UFFD_WP,		"pte_uffd_wp")			\
+	EM( SCAN_PTE_MAPPED_HUGEPAGE,	"pte_mapped_hugepage")		\
 	EM( SCAN_PAGE_RO,		"no_writable_page")		\
 	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")	\
 	EM( SCAN_PAGE_NULL,		"page_null")			\
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 3e64105398c3..8165a1fc42dd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -34,6 +34,7 @@ enum scan_result {
 	SCAN_EXCEED_SHARED_PTE,
 	SCAN_PTE_NON_PRESENT,
 	SCAN_PTE_UFFD_WP,
+	SCAN_PTE_MAPPED_HUGEPAGE,
 	SCAN_PAGE_RO,
 	SCAN_LACK_REFERENCED_PAGE,
 	SCAN_PAGE_NULL,
@@ -1349,18 +1350,22 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
  */
-static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 					  unsigned long addr)
 {
 	struct mm_slot *mm_slot;
+	bool ret = false;
 
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
 
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot = get_mm_slot(mm);
-	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
+	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP)) {
 		mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
+		ret = true;
+	}
 	spin_unlock(&khugepaged_mm_lock);
+	return ret;
 }
 
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -1397,9 +1402,16 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	pte_t *start_pte, *pte;
 	pmd_t *pmd;
 	spinlock_t *ptl;
-	int count = 0;
+	int count = 0, result = SCAN_FAIL;
 	int i;
 
+	mmap_assert_write_locked(mm);
+
+	/* Fast check before locking page if already PMD-mapped */
+	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
+	if (result != SCAN_SUCCEED)
+		return;
+
 	if (!vma || !vma->vm_file ||
 	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
 		return;
@@ -1748,7 +1760,11 @@ static int collapse_file(struct mm_struct *mm, struct file *file,
 		 * we locked the first page, then a THP might be there already.
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			result = compound_order(page) == HPAGE_PMD_ORDER &&
+					index == start
+					/* Maybe PMD-mapped */
+					? SCAN_PTE_MAPPED_HUGEPAGE
+					: SCAN_PAGE_COMPOUND;
 			goto out_unlock;
 		}
 
@@ -1986,7 +2002,11 @@ static int khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 		 * into a PMD sized page
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			result = compound_order(page) == HPAGE_PMD_ORDER &&
+					xas.xa_index == start
+					/* Maybe PMD-mapped */
+					? SCAN_PTE_MAPPED_HUGEPAGE
+					: SCAN_PAGE_COMPOUND;
 			break;
 		}
 
@@ -2132,8 +2152,19 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 							  &mmap_locked, cc);
 			}
-			if (*result == SCAN_SUCCEED)
+			switch (*result) {
+			case SCAN_PTE_MAPPED_HUGEPAGE:
+				if (!khugepaged_add_pte_mapped_thp(mm,
+						khugepaged_scan.address))
+					break;
+				fallthrough;
+			case SCAN_SUCCEED:
 				++khugepaged_pages_collapsed;
+				break;
+			default:
+				break;
+			}
+
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
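
[Not part of the patch: the pages_collapsed counter discussed in the
commit message can be observed from userspace. A minimal sketch,
assuming a kernel built with CONFIG_TRANSPARENT_HUGEPAGE and the
standard sysfs layout; on other kernels the file is absent and the
script falls back to "n/a":]

```shell
#!/bin/sh
# Read khugepaged's cumulative collapse counter (the sysfs path named
# in the commit message above). Prints the running total, or "n/a" if
# the kernel does not expose it.
counter=/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
if [ -r "$counter" ]; then
	cat "$counter"
else
	echo "n/a"
fi
```

Sampling this value before and after a workload gives the number of
collapse events khugepaged accounted in between, with the caveat from
the commit message that a counted event does not guarantee a page
table was actually changed.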