From patchwork Wed Feb 23 19:48:03 2022
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12757476
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v6 1/5] mm/sparse-vmemmap: add a pgmap argument to section activation
Date: Wed, 23 Feb 2022 19:48:03 +0000
Message-Id: <20220223194807.12070-2-joao.m.martins@oracle.com>
In-Reply-To: <20220223194807.12070-1-joao.m.martins@oracle.com>
References: <20220223194807.12070-1-joao.m.martins@oracle.com>
X-Mailing-List: nvdimm@lists.linux.dev
In support of using compound pages for devmap mappings, plumb the pgmap down
to the vmemmap_populate implementation. Note that while altmap is retrievable
from pgmap, the memory hotplug code passes altmap without pgmap[*], so both
need to be independently plumbed.
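For context, here is a minimal sketch (not part of the patch) of what the
plumbing enables for a hotplug caller. The helper name is made up, but
add_pages(), struct mhp_params and pgmap_altmap() are existing interfaces,
and .pgmap is the field this patch introduces:

	/* Hypothetical caller: hotplug device memory described by @pgmap. */
	static int example_add_device_memory(int nid, struct dev_pagemap *pgmap,
					     unsigned long start_pfn,
					     unsigned long nr_pages)
	{
		struct mhp_params params = {
			.pgprot = PAGE_KERNEL,
			.altmap = pgmap_altmap(pgmap),	/* NULL unless PGMAP_ALTMAP_VALID */
			.pgmap  = pgmap,		/* new field added by this patch */
		};

		/* __add_pages() forwards params->pgmap to sparse_add_section() */
		return add_pages(nid, start_pfn, nr_pages, &params);
	}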
So in addition to @altmap, pass @pgmap to sparse section populate functions namely: sparse_add_section section_activate populate_section_memmap __populate_section_memmap Passing @pgmap allows __populate_section_memmap() to both fetch the vmemmap_shift in which memmap metadata is created for and also to let sparse-vmemmap fetch pgmap ranges to co-relate to a given section and pick whether to just reuse tail pages from past onlined sections. While at it, fix the kdoc for @altmap for sparse_add_section(). [*] https://lore.kernel.org/linux-mm/20210319092635.6214-1-osalvador@suse.de/ Signed-off-by: Joao Martins Reviewed-by: Dan Williams --- include/linux/memory_hotplug.h | 5 ++++- include/linux/mm.h | 3 ++- mm/memory_hotplug.c | 3 ++- mm/sparse-vmemmap.c | 3 ++- mm/sparse.c | 26 ++++++++++++++++---------- 5 files changed, 26 insertions(+), 14 deletions(-) diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 1ce6f8044f1e..e0b2209ab71c 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -15,6 +15,7 @@ struct memory_block; struct memory_group; struct resource; struct vmem_altmap; +struct dev_pagemap; #ifdef CONFIG_HAVE_ARCH_NODEDATA_EXTENSION /* @@ -122,6 +123,7 @@ typedef int __bitwise mhp_t; struct mhp_params { struct vmem_altmap *altmap; pgprot_t pgprot; + struct dev_pagemap *pgmap; }; bool mhp_range_allowed(u64 start, u64 size, bool need_mapping); @@ -333,7 +335,8 @@ extern void remove_pfn_range_from_zone(struct zone *zone, unsigned long nr_pages); extern bool is_memblock_offlined(struct memory_block *mem); extern int sparse_add_section(int nid, unsigned long pfn, - unsigned long nr_pages, struct vmem_altmap *altmap); + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap); extern void sparse_remove_section(struct mem_section *ms, unsigned long pfn, unsigned long nr_pages, unsigned long map_offset, struct vmem_altmap *altmap); diff --git a/include/linux/mm.h b/include/linux/mm.h index 49692a64d645..5f549cf6a4e8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3111,7 +3111,8 @@ int vmemmap_remap_alloc(unsigned long start, unsigned long end, void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap); + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap); pgd_t *vmemmap_pgd_populate(unsigned long addr, int node); p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node); pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node); diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index aee69281dad6..2cc1c49a2be6 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -328,7 +328,8 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages, /* Select all remaining pages up to the next section boundary */ cur_nr_pages = min(end_pfn - pfn, SECTION_ALIGN_UP(pfn + 1) - pfn); - err = sparse_add_section(nid, pfn, cur_nr_pages, altmap); + err = sparse_add_section(nid, pfn, cur_nr_pages, altmap, + params->pgmap); if (err) break; cond_resched(); diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 8aecd6b3896c..c506f77cff23 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -641,7 +641,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, } struct page * __meminit __populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap 
*altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long start = (unsigned long) pfn_to_page(pfn); unsigned long end = start + nr_pages * sizeof(struct page); diff --git a/mm/sparse.c b/mm/sparse.c index 952f06d8f373..d2d76d158b39 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -427,7 +427,8 @@ static unsigned long __init section_map_size(void) } struct page __init *__populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long size = section_map_size(); struct page *map = sparse_buffer_alloc(size); @@ -524,7 +525,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin, break; map = __populate_section_memmap(pfn, PAGES_PER_SECTION, - nid, NULL); + nid, NULL, NULL); if (!map) { pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.", __func__, nid); @@ -629,9 +630,10 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn) #ifdef CONFIG_SPARSEMEM_VMEMMAP static struct page * __meminit populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { - return __populate_section_memmap(pfn, nr_pages, nid, altmap); + return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap); } static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages, @@ -700,7 +702,8 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages) } #else struct page * __meminit populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { return kvmalloc_node(array_size(sizeof(struct page), PAGES_PER_SECTION), GFP_KERNEL, nid); @@ -823,7 +826,8 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages, } static struct page * __meminit section_activate(int nid, unsigned long pfn, - unsigned long nr_pages, struct vmem_altmap *altmap) + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { struct mem_section *ms = __pfn_to_section(pfn); struct mem_section_usage *usage = NULL; @@ -855,7 +859,7 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, if (nr_pages < PAGES_PER_SECTION && early_section(ms)) return pfn_to_page(pfn); - memmap = populate_section_memmap(pfn, nr_pages, nid, altmap); + memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap); if (!memmap) { section_deactivate(pfn, nr_pages, altmap); return ERR_PTR(-ENOMEM); @@ -869,7 +873,8 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, * @nid: The node to add section on * @start_pfn: start pfn of the memory range * @nr_pages: number of pfns to add in the section - * @altmap: device page map + * @altmap: alternate pfns to allocate the memmap backing store + * @pgmap: alternate compound page geometry for devmap mappings * * This is only intended for hotplug. * @@ -883,7 +888,8 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, * * -ENOMEM - Out of memory. 
*/ int __meminit sparse_add_section(int nid, unsigned long start_pfn, - unsigned long nr_pages, struct vmem_altmap *altmap) + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long section_nr = pfn_to_section_nr(start_pfn); struct mem_section *ms; @@ -894,7 +900,7 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn, if (ret < 0) return ret; - memmap = section_activate(nid, start_pfn, nr_pages, altmap); + memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap); if (IS_ERR(memmap)) return PTR_ERR(memmap);
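As an illustrative aside (not part of the patch), what __populate_section_memmap()
gains from the new @pgmap argument is access to the compound geometry of the
device memory. The helper below is only a sketch; pgmap->vmemmap_shift and
pgmap_vmemmap_nr() already exist in include/linux/memremap.h:

	/*
	 * Sketch: base pages per devmap page for this pgmap. A 2M aligned
	 * device-dax namespace with 4K base pages has vmemmap_shift == 9,
	 * i.e. 512 base pages per compound page; vmemmap_shift == 0 means
	 * order-0 pages, as before.
	 */
	static unsigned long example_pages_per_devmap_page(struct dev_pagemap *pgmap)
	{
		if (!pgmap || !pgmap->vmemmap_shift)
			return 1;

		return pgmap_vmemmap_nr(pgmap);	/* 1UL << vmemmap_shift */
	}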
From patchwork Wed Feb 23 19:48:04 2022
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12757475
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v6 2/5] mm/sparse-vmemmap: refactor core of vmemmap_populate_basepages() to helper
Date: Wed, 23 Feb 2022 19:48:04 +0000
Message-Id: <20220223194807.12070-3-joao.m.martins@oracle.com>
In-Reply-To: <20220223194807.12070-1-joao.m.martins@oracle.com>
References: <20220223194807.12070-1-joao.m.martins@oracle.com>
X-Mailing-List: nvdimm@lists.linux.dev
In preparation for describing a memmap with compound pages, move the actual
pte population logic into a separate function, vmemmap_populate_address(), and
have vmemmap_populate_basepages() walk through all base pages it needs to
populate. While doing that, change the helper to use a pte_t* as return value,
rather than a hardcoded errno of 0 or -ENOMEM.

Signed-off-by: Joao Martins
Reviewed-by: Muchun Song

---
 mm/sparse-vmemmap.c | 46 ++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index c506f77cff23..44cb77523003 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -608,33 +608,45 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node) return pgd; } -int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, - int node, struct vmem_altmap *altmap) +static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, + struct vmem_altmap *altmap) { - unsigned long addr = start; pgd_t *pgd; p4d_t *p4d; pud_t *pud; pmd_t *pmd; pte_t *pte; + pgd = vmemmap_pgd_populate(addr, node); + if (!pgd) + return NULL; + p4d = vmemmap_p4d_populate(pgd, addr, node); + if (!p4d) + return NULL; + pud = vmemmap_pud_populate(p4d, addr, node); + if (!pud) + return NULL; + pmd = vmemmap_pmd_populate(pud, addr, node); + if (!pmd) + return NULL; + pte = vmemmap_pte_populate(pmd, addr, node, altmap); + if (!pte) + return NULL; + vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); + + return pte; +} + +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, + int node, struct vmem_altmap *altmap) +{ + unsigned long addr = start; + pte_t *pte; + for (; addr < end; addr += PAGE_SIZE) { - pgd = vmemmap_pgd_populate(addr, node); - if (!pgd) - return -ENOMEM; - p4d = vmemmap_p4d_populate(pgd, addr, node); - if (!p4d) - return -ENOMEM; - pud = vmemmap_pud_populate(p4d, addr, node); - if (!pud) - return -ENOMEM; - pmd = vmemmap_pmd_populate(pud, addr, node); - if (!pmd) - return -ENOMEM; - pte = vmemmap_pte_populate(pmd, addr, node, altmap); + pte = vmemmap_populate_address(addr, node, altmap); if (!pte) return -ENOMEM; - vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); } return 0;
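A short sketch of why the refactored helper returns a pte_t * rather than a
0/-ENOMEM errno: the caller gets a handle on the page backing the mapping.
The function below is illustrative only; the actual reuse logic arrives with
the compound devmap patch later in this series:

	/* Hypothetical caller inside mm/sparse-vmemmap.c */
	static struct page *example_backing_page(unsigned long addr, int node,
						 struct vmem_altmap *altmap)
	{
		pte_t *pte = vmemmap_populate_address(addr, node, altmap);

		/* NULL corresponds to -ENOMEM in the old interface */
		if (!pte)
			return NULL;

		/* the page that now backs this part of the memmap */
		return pte_page(*pte);
	}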
From patchwork Wed Feb 23 19:48:05 2022
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12757478
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v6 3/5] mm/hugetlb_vmemmap: move comment block to Documentation/vm
Date: Wed, 23 Feb 2022 19:48:05 +0000
Message-Id: <20220223194807.12070-4-joao.m.martins@oracle.com>
In-Reply-To: <20220223194807.12070-1-joao.m.martins@oracle.com>
References: <20220223194807.12070-1-joao.m.martins@oracle.com>
X-Mailing-List: nvdimm@lists.linux.dev
Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Feb 2022 19:48:39.3187 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: NRmS9n+Ln+em+RglDr08BU/ZfA5gcsLLzbpLAgYyXvnPsS5vpL76MAVtMyLhVjoSDSN2ZvSv8u/+/AttdCUkPQcPpHVOph9chvnSmw/011U= X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB4930 X-Proofpoint-Virus-Version: vendor=nai engine=6300 definitions=10267 signatures=681306 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 malwarescore=0 mlxlogscore=999 adultscore=0 bulkscore=0 phishscore=0 suspectscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2201110000 definitions=main-2202230111 X-Proofpoint-ORIG-GUID: 5zL8iVcp4CuFuOoNBhOtPxZsPNuehyXT X-Proofpoint-GUID: 5zL8iVcp4CuFuOoNBhOtPxZsPNuehyXT In preparation for device-dax for using hugetlbfs compound page tail deduplication technique, move the comment block explanation into a common place in Documentation/vm. Cc: Muchun Song Cc: Mike Kravetz Suggested-by: Dan Williams Signed-off-by: Joao Martins Reviewed-by: Muchun Song Reviewed-by: Dan Williams --- Documentation/vm/index.rst | 1 + Documentation/vm/vmemmap_dedup.rst | 175 +++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.c | 168 +-------------------------- 3 files changed, 177 insertions(+), 167 deletions(-) create mode 100644 Documentation/vm/vmemmap_dedup.rst diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst index 44365c4574a3..2fb612bb72c9 100644 --- a/Documentation/vm/index.rst +++ b/Documentation/vm/index.rst @@ -37,5 +37,6 @@ algorithms. If you are looking for advice on simply allocating memory, see the transhuge unevictable-lru vmalloced-kernel-stacks + vmemmap_dedup z3fold zsmalloc diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst new file mode 100644 index 000000000000..8143b2ce414d --- /dev/null +++ b/Documentation/vm/vmemmap_dedup.rst @@ -0,0 +1,175 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. _vmemmap_dedup: + +================================== +Free some vmemmap pages of HugeTLB +================================== + +The struct page structures (page structs) are used to describe a physical +page frame. By default, there is a one-to-one mapping from a page frame to +it's corresponding page struct. + +HugeTLB pages consist of multiple base page size pages and is supported by +many architectures. See hugetlbpage.rst in the Documentation directory for +more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB +are currently supported. Since the base page size on x86 is 4KB, a 2MB +HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of +4096 base pages. For each base page, there is a corresponding page struct. + +Within the HugeTLB subsystem, only the first 4 page structs are used to +contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides +this upper limit. The only 'useful' information in the remaining page structs +is the compound_head field, and this field is the same for all tail pages. + +By removing redundant page structs for HugeTLB pages, memory can be returned +to the buddy allocator for other uses. + +Different architectures support different HugeTLB pages. For example, the +following table is the HugeTLB page size supported by x86 and arm64 +architectures. 
Because arm64 supports 4k, 16k, and 64k base pages and +supports contiguous entries, so it supports many kinds of sizes of HugeTLB +page. + ++--------------+-----------+-----------------------------------------------+ +| Architecture | Page Size | HugeTLB Page Size | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| x86-64 | 4KB | 2MB | 1GB | | | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| | 4KB | 64KB | 2MB | 32MB | 1GB | +| +-----------+-----------+-----------+-----------+-----------+ +| arm64 | 16KB | 2MB | 32MB | 1GB | | +| +-----------+-----------+-----------+-----------+-----------+ +| | 64KB | 2MB | 512MB | 16GB | | ++--------------+-----------+-----------+-----------+-----------+-----------+ + +When the system boot up, every HugeTLB page has more than one struct page +structs which size is (unit: pages): + + struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + +Where HugeTLB_Size is the size of the HugeTLB page. We know that the size +of the HugeTLB page is always n times PAGE_SIZE. So we can get the following +relationship. + + HugeTLB_Size = n * PAGE_SIZE + +Then, + + struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + = n * sizeof(struct page) / PAGE_SIZE + +We can use huge mapping at the pud/pmd level for the HugeTLB page. + +For the HugeTLB page of the pmd level mapping, then + + struct_size = n * sizeof(struct page) / PAGE_SIZE + = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + = sizeof(struct page) / sizeof(pte_t) + = 64 / 8 + = 8 (pages) + +Where n is how many pte entries which one page can contains. So the value of +n is (PAGE_SIZE / sizeof(pte_t)). + +This optimization only supports 64-bit system, so the value of sizeof(pte_t) +is 8. And this optimization also applicable only when the size of struct page +is a power of two. In most cases, the size of struct page is 64 bytes (e.g. +x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the +size of struct page structs of it is 8 page frames which size depends on the +size of the base page. + +For the HugeTLB page of the pud level mapping, then + + struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + = PAGE_SIZE / 8 * 8 (pages) + = PAGE_SIZE (pages) + +Where the struct_size(pmd) is the size of the struct page structs of a +HugeTLB page of the pmd level mapping. + +E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB +HugeTLB page consists in 4096. + +Next, we take the pmd level mapping of the HugeTLB page as an example to +show the internal implementation of this optimization. There are 8 pages +struct page structs associated with a HugeTLB page which is pmd mapped. + +Here is how things look before optimization. 
+ + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | -------------> | 1 | + | | +-----------+ +-----------+ + | | | 2 | -------------> | 2 | + | | +-----------+ +-----------+ + | | | 3 | -------------> | 3 | + | | +-----------+ +-----------+ + | | | 4 | -------------> | 4 | + | PMD | +-----------+ +-----------+ + | level | | 5 | -------------> | 5 | + | mapping | +-----------+ +-----------+ + | | | 6 | -------------> | 6 | + | | +-----------+ +-----------+ + | | | 7 | -------------> | 7 | + | | +-----------+ +-----------+ + | | + | | + | | + +-----------+ + +The value of page->compound_head is the same for all tail pages. The first +page of page structs (page 0) associated with the HugeTLB page contains the 4 +page structs necessary to describe the HugeTLB. The only use of the remaining +pages of page structs (page 1 to page 7) is to point to page->compound_head. +Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs +will be used for each HugeTLB page. This will allow us to free the remaining +7 pages to the buddy allocator. + +Here is how things look after remapping. + + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | ---------------^ ^ ^ ^ ^ ^ ^ + | | +-----------+ | | | | | | + | | | 2 | -----------------+ | | | | | + | | +-----------+ | | | | | + | | | 3 | -------------------+ | | | | + | | +-----------+ | | | | + | | | 4 | ---------------------+ | | | + | PMD | +-----------+ | | | + | level | | 5 | -----------------------+ | | + | mapping | +-----------+ | | + | | | 6 | -------------------------+ | + | | +-----------+ | + | | | 7 | ---------------------------+ + | | +-----------+ + | | + | | + | | + +-----------+ + +When a HugeTLB is freed to the buddy system, we should allocate 7 pages for +vmemmap pages and restore the previous mapping relationship. + +For the HugeTLB page of the pud level mapping. It is similar to the former. +We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages. + +Apart from the HugeTLB page of the pmd/pud level mapping, some architectures +(e.g. aarch64) provides a contiguous bit in the translation table entries +that hints to the MMU to indicate that it is one of a contiguous set of +entries that can be cached in a single TLB entry. + +The contiguous bit is used to increase the mapping size at the pmd and pte +(last) level. So this type of HugeTLB page can be optimized only when its +size of the struct page structs is greater than 1 page. + +Notice: The head vmemmap page is not freed to the buddy allocator and all +tail vmemmap pages are mapped to the head vmemmap page frame. So we can see +more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) +associated with each HugeTLB page. The compound_head() can handle this +correctly (more details refer to the comment above compound_head()). diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 791626983c2e..dbaa837b19c6 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -6,173 +6,7 @@ * * Author: Muchun Song * - * The struct page structures (page structs) are used to describe a physical - * page frame. By default, there is a one-to-one mapping from a page frame to - * it's corresponding page struct. 
- * - * HugeTLB pages consist of multiple base page size pages and is supported by - * many architectures. See hugetlbpage.rst in the Documentation directory for - * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB - * are currently supported. Since the base page size on x86 is 4KB, a 2MB - * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of - * 4096 base pages. For each base page, there is a corresponding page struct. - * - * Within the HugeTLB subsystem, only the first 4 page structs are used to - * contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides - * this upper limit. The only 'useful' information in the remaining page structs - * is the compound_head field, and this field is the same for all tail pages. - * - * By removing redundant page structs for HugeTLB pages, memory can be returned - * to the buddy allocator for other uses. - * - * Different architectures support different HugeTLB pages. For example, the - * following table is the HugeTLB page size supported by x86 and arm64 - * architectures. Because arm64 supports 4k, 16k, and 64k base pages and - * supports contiguous entries, so it supports many kinds of sizes of HugeTLB - * page. - * - * +--------------+-----------+-----------------------------------------------+ - * | Architecture | Page Size | HugeTLB Page Size | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | x86-64 | 4KB | 2MB | 1GB | | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | | 4KB | 64KB | 2MB | 32MB | 1GB | - * | +-----------+-----------+-----------+-----------+-----------+ - * | arm64 | 16KB | 2MB | 32MB | 1GB | | - * | +-----------+-----------+-----------+-----------+-----------+ - * | | 64KB | 2MB | 512MB | 16GB | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * - * When the system boot up, every HugeTLB page has more than one struct page - * structs which size is (unit: pages): - * - * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * - * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size - * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following - * relationship. - * - * HugeTLB_Size = n * PAGE_SIZE - * - * Then, - * - * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * = n * sizeof(struct page) / PAGE_SIZE - * - * We can use huge mapping at the pud/pmd level for the HugeTLB page. - * - * For the HugeTLB page of the pmd level mapping, then - * - * struct_size = n * sizeof(struct page) / PAGE_SIZE - * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE - * = sizeof(struct page) / sizeof(pte_t) - * = 64 / 8 - * = 8 (pages) - * - * Where n is how many pte entries which one page can contains. So the value of - * n is (PAGE_SIZE / sizeof(pte_t)). - * - * This optimization only supports 64-bit system, so the value of sizeof(pte_t) - * is 8. And this optimization also applicable only when the size of struct page - * is a power of two. In most cases, the size of struct page is 64 bytes (e.g. - * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the - * size of struct page structs of it is 8 page frames which size depends on the - * size of the base page. 
- * - * For the HugeTLB page of the pud level mapping, then - * - * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) - * = PAGE_SIZE / 8 * 8 (pages) - * = PAGE_SIZE (pages) - * - * Where the struct_size(pmd) is the size of the struct page structs of a - * HugeTLB page of the pmd level mapping. - * - * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB - * HugeTLB page consists in 4096. - * - * Next, we take the pmd level mapping of the HugeTLB page as an example to - * show the internal implementation of this optimization. There are 8 pages - * struct page structs associated with a HugeTLB page which is pmd mapped. - * - * Here is how things look before optimization. - * - * HugeTLB struct pages(8 pages) page frame(8 pages) - * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ - * | | | 0 | -------------> | 0 | - * | | +-----------+ +-----------+ - * | | | 1 | -------------> | 1 | - * | | +-----------+ +-----------+ - * | | | 2 | -------------> | 2 | - * | | +-----------+ +-----------+ - * | | | 3 | -------------> | 3 | - * | | +-----------+ +-----------+ - * | | | 4 | -------------> | 4 | - * | PMD | +-----------+ +-----------+ - * | level | | 5 | -------------> | 5 | - * | mapping | +-----------+ +-----------+ - * | | | 6 | -------------> | 6 | - * | | +-----------+ +-----------+ - * | | | 7 | -------------> | 7 | - * | | +-----------+ +-----------+ - * | | - * | | - * | | - * +-----------+ - * - * The value of page->compound_head is the same for all tail pages. The first - * page of page structs (page 0) associated with the HugeTLB page contains the 4 - * page structs necessary to describe the HugeTLB. The only use of the remaining - * pages of page structs (page 1 to page 7) is to point to page->compound_head. - * Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs - * will be used for each HugeTLB page. This will allow us to free the remaining - * 7 pages to the buddy allocator. - * - * Here is how things look after remapping. - * - * HugeTLB struct pages(8 pages) page frame(8 pages) - * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ - * | | | 0 | -------------> | 0 | - * | | +-----------+ +-----------+ - * | | | 1 | ---------------^ ^ ^ ^ ^ ^ ^ - * | | +-----------+ | | | | | | - * | | | 2 | -----------------+ | | | | | - * | | +-----------+ | | | | | - * | | | 3 | -------------------+ | | | | - * | | +-----------+ | | | | - * | | | 4 | ---------------------+ | | | - * | PMD | +-----------+ | | | - * | level | | 5 | -----------------------+ | | - * | mapping | +-----------+ | | - * | | | 6 | -------------------------+ | - * | | +-----------+ | - * | | | 7 | ---------------------------+ - * | | +-----------+ - * | | - * | | - * | | - * +-----------+ - * - * When a HugeTLB is freed to the buddy system, we should allocate 7 pages for - * vmemmap pages and restore the previous mapping relationship. - * - * For the HugeTLB page of the pud level mapping. It is similar to the former. - * We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages. - * - * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures - * (e.g. aarch64) provides a contiguous bit in the translation table entries - * that hints to the MMU to indicate that it is one of a contiguous set of - * entries that can be cached in a single TLB entry. - * - * The contiguous bit is used to increase the mapping size at the pmd and pte - * (last) level. 
So this type of HugeTLB page can be optimized only when its - * size of the struct page structs is greater than 1 page. - * - * Notice: The head vmemmap page is not freed to the buddy allocator and all - * tail vmemmap pages are mapped to the head vmemmap page frame. So we can see - * more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) - * associated with each HugeTLB page. The compound_head() can handle this - * correctly (more details refer to the comment above compound_head()). + * See Documentation/vm/vmemmap_dedup.rst */ #define pr_fmt(fmt) "HugeTLB: " fmt
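A worked example of the struct_size arithmetic from the documentation added
above, as standalone userspace C. The 4K base page and 64-byte struct page are
assumptions matching the x86-64 case described in the text, not values read
from kernel headers:

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_size   = 4096;	/* assumed 4K base page */
		unsigned long struct_page = 64;		/* assumed sizeof(struct page) */
		unsigned long hugetlb     = 2UL << 20;	/* 2MB HugeTLB page */

		unsigned long n            = hugetlb / page_size;	/* 512 base pages */
		unsigned long memmap_bytes = n * struct_page;		/* 32768 bytes */
		unsigned long memmap_pages = memmap_bytes / page_size;	/* 8 pages */

		/* 7 of the 8 vmemmap pages can be remapped to the head page and freed */
		printf("vmemmap pages: %lu, freeable after dedup: %lu\n",
		       memmap_pages, memmap_pages - 1);
		return 0;
	}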
From: Joao Martins To: linux-mm@kvack.org Cc: Dan Williams , Vishal Verma , Matthew Wilcox , Jason Gunthorpe , Jane Chu , Muchun Song , Mike Kravetz , Andrew Morton , Jonathan Corbet , Christoph Hellwig , nvdimm@lists.linux.dev, linux-doc@vger.kernel.org, Joao Martins Subject: [PATCH v6 4/5] mm/sparse-vmemmap: improve memory savings for compound devmaps Date: Wed, 23 Feb 2022 19:48:06 +0000 Message-Id: <20220223194807.12070-5-joao.m.martins@oracle.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220223194807.12070-1-joao.m.martins@oracle.com> References: <20220223194807.12070-1-joao.m.martins@oracle.com> Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev MIME-Version: 1.0
A compound devmap is a dev_pagemap with @vmemmap_shift > 0, meaning that its pages are mapped at a given huge page alignment and use compound
pages as opposed to order-0 pages. Take advantage of the fact that most tail pages look the same (except the first two) to minimize struct page overhead. Allocate one vmemmap page for the area that contains the head page and a separate one for the next 64 struct pages. The rest of the subsections then reuse this tail vmemmap page to initialize the rest of the tail pages. Sections are arch-dependent (e.g. on x86 they are 64M, 128M or 512M) and a compound devmap with a big enough @vmemmap_shift (e.g. 1G PUD) may cross multiple sections. The vmemmap code needs to consult @pgmap so that multiple sections that all map the same tail data can refer back to the first copy of that data for a given gigantic page. On compound devmaps with 2M alignment, this mechanism saves 6 vmemmap pages out of the 8 necessary to map the subsection's 512 struct pages. On a 1G compound devmap it saves 4094 pages. Altmap isn't supported yet, given various restrictions in the altmap pfn allocator, so fall back to the already in use vmemmap_populate(). It is worth noting that altmap for devmap mappings was there to relieve the pressure of inordinate amounts of memmap space to map terabytes of pmem. With compound pages the motivation for altmaps for pmem is reduced. Signed-off-by: Joao Martins --- Documentation/vm/vmemmap_dedup.rst | 56 +++++++++++- include/linux/mm.h | 2 +- mm/memremap.c | 1 + mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++-- 4 files changed, 188 insertions(+), 12 deletions(-) diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst index 8143b2ce414d..de958bbbf78c 100644 --- a/Documentation/vm/vmemmap_dedup.rst +++ b/Documentation/vm/vmemmap_dedup.rst @@ -2,9 +2,12 @@ .. _vmemmap_dedup: -================================== -Free some vmemmap pages of HugeTLB -================================== +========================================= +A vmemmap diet for HugeTLB and Device DAX +========================================= + +HugeTLB +======= The struct page structures (page structs) are used to describe a physical page frame. By default, there is a one-to-one mapping from a page frame to @@ -173,3 +176,50 @@ tail vmemmap pages are mapped to the head vmemmap page frame. So we can see more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) associated with each HugeTLB page. The compound_head() can handle this correctly (more details refer to the comment above compound_head()). + +Device DAX +========== + +The device-dax interface uses the same tail deduplication technique explained +in the previous chapter, except when used with the vmemmap in +the device (altmap). + +The following page sizes are supported in DAX: PAGE_SIZE (4K on x86_64), +PMD_SIZE (2M on x86_64) and PUD_SIZE (1G on x86_64). + +The differences with HugeTLB are relatively minor. + +It only uses 3 page structs for storing all information as opposed +to 4 on HugeTLB pages. + +There's no remapping of vmemmap given that device-dax memory is not part of +System RAM ranges initialized at boot. Thus the tail page deduplication +happens at a later stage when we populate the sections. HugeTLB reuses the +head vmemmap page, whereas device-dax reuses the tail +vmemmap page. This results in only half of the savings compared to HugeTLB. + +Deduplicated tail pages are not mapped read-only.
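As a quick sanity check of the savings quoted above (6 out of 8 vmemmap pages for a 2M compound devmap, 4094 out of 4096 for 1G), the short stand-alone C program below redoes the arithmetic. It is only a sketch: the 4K page size and the 64-byte struct page size are assumed x86_64 defaults, not values taken from this patch.

/* Stand-alone model of the device-dax vmemmap dedup arithmetic.
 * Assumes 4K base pages and a 64-byte struct page (x86_64 defaults).
 */
#include <stdio.h>

#define MODEL_PAGE_SIZE		4096UL
#define MODEL_STRUCT_PAGE_SZ	64UL	/* assumed sizeof(struct page) */

static void report(const char *name, unsigned long compound_size)
{
	unsigned long nr_struct_pages = compound_size / MODEL_PAGE_SIZE;
	unsigned long vmemmap_pages = nr_struct_pages * MODEL_STRUCT_PAGE_SZ / MODEL_PAGE_SIZE;
	/* device-dax keeps the head vmemmap page plus one tail vmemmap page */
	unsigned long kept = 2, saved = vmemmap_pages - kept;

	printf("%s: %lu vmemmap pages, %lu kept, %lu saved\n",
	       name, vmemmap_pages, kept, saved);
}

int main(void)
{
	report("2M compound devmap", 2UL << 20);	/* 8 vmemmap pages, 6 saved */
	report("1G compound devmap", 1UL << 30);	/* 4096 vmemmap pages, 4094 saved */
	return 0;
}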
+ +Here's how things look like on device-dax after the sections are populated: + + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | -------------> | 1 | + | | +-----------+ +-----------+ + | | | 2 | ----------------^ ^ ^ ^ ^ ^ + | | +-----------+ | | | | | + | | | 3 | ------------------+ | | | | + | | +-----------+ | | | | + | | | 4 | --------------------+ | | | + | PMD | +-----------+ | | | + | level | | 5 | ----------------------+ | | + | mapping | +-----------+ | | + | | | 6 | ------------------------+ | + | | +-----------+ | + | | | 7 | --------------------------+ + | | +-----------+ + | | + | | + | | + +-----------+ diff --git a/include/linux/mm.h b/include/linux/mm.h index 5f549cf6a4e8..b0798b9c6a6a 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3118,7 +3118,7 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node); pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node); pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node); pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node, - struct vmem_altmap *altmap); + struct vmem_altmap *altmap, struct page *block); void *vmemmap_alloc_block(unsigned long size, int node); struct vmem_altmap; void *vmemmap_alloc_block_buf(unsigned long size, int node, diff --git a/mm/memremap.c b/mm/memremap.c index 2e9148a3421a..a6be2f5bf443 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -307,6 +307,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid) { struct mhp_params params = { .altmap = pgmap_altmap(pgmap), + .pgmap = pgmap, .pgprot = PAGE_KERNEL, }; const int nr_range = pgmap->nr_range; diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 44cb77523003..195c017c8d23 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -533,16 +533,31 @@ void __meminit vmemmap_verify(pte_t *pte, int node, } pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node, - struct vmem_altmap *altmap) + struct vmem_altmap *altmap, + struct page *reuse) { pte_t *pte = pte_offset_kernel(pmd, addr); if (pte_none(*pte)) { pte_t entry; void *p; - p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap); - if (!p) - return NULL; + if (!reuse) { + p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap); + if (!p) + return NULL; + } else { + /* + * When a PTE/PMD entry is freed from the init_mm + * there's a a free_pages() call to this page allocated + * above. Thus this get_page() is paired with the + * put_page_testzero() on the freeing path. + * This can only called by certain ZONE_DEVICE path, + * and through vmemmap_populate_compound_pages() when + * slab is available. 
+ */ + get_page(reuse); + p = page_to_virt(reuse); + } entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL); set_pte_at(&init_mm, addr, pte, entry); } @@ -609,7 +624,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node) } static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, - struct vmem_altmap *altmap) + struct vmem_altmap *altmap, + struct page *reuse) { pgd_t *pgd; p4d_t *p4d; @@ -629,7 +645,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, pmd = vmemmap_pmd_populate(pud, addr, node); if (!pmd) return NULL; - pte = vmemmap_pte_populate(pmd, addr, node, altmap); + pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse); if (!pte) return NULL; vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); @@ -644,7 +660,23 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, pte_t *pte; for (; addr < end; addr += PAGE_SIZE) { - pte = vmemmap_populate_address(addr, node, altmap); + pte = vmemmap_populate_address(addr, node, altmap, NULL); + if (!pte) + return -ENOMEM; + } + + return 0; +} + +static int __meminit vmemmap_populate_range(unsigned long start, + unsigned long end, + int node, struct page *page) +{ + unsigned long addr = start; + pte_t *pte; + + for (; addr < end; addr += PAGE_SIZE) { + pte = vmemmap_populate_address(addr, node, NULL, page); if (!pte) return -ENOMEM; } @@ -652,18 +684,111 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, return 0; } +/* + * For compound pages bigger than section size (e.g. x86 1G compound + * pages with 2M subsection size) fill the rest of sections as tail + * pages. + * + * Note that memremap_pages() resets @nr_range value and will increment + * it after each range successful onlining. Thus the value or @nr_range + * at section memmap populate corresponds to the in-progress range + * being onlined here. + */ +static bool __meminit reuse_compound_section(unsigned long start_pfn, + struct dev_pagemap *pgmap) +{ + unsigned long nr_pages = pgmap_vmemmap_nr(pgmap); + unsigned long offset = start_pfn - + PHYS_PFN(pgmap->ranges[pgmap->nr_range].start); + + return !IS_ALIGNED(offset, nr_pages) && nr_pages > PAGES_PER_SUBSECTION; +} + +static pte_t * __meminit compound_section_tail_page(unsigned long addr) +{ + pte_t *pte; + + addr -= PAGE_SIZE; + + /* + * Assuming sections are populated sequentially, the previous section's + * page data can be reused. + */ + pte = pte_offset_kernel(pmd_off_k(addr), addr); + if (!pte) + return NULL; + + return pte; +} + +static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn, + unsigned long start, + unsigned long end, int node, + struct dev_pagemap *pgmap) +{ + unsigned long size, addr; + pte_t *pte; + int rc; + + if (reuse_compound_section(start_pfn, pgmap)) { + pte = compound_section_tail_page(start); + if (!pte) + return -ENOMEM; + + /* + * Reuse the page that was populated in the prior iteration + * with just tail struct pages. 
+ */ + return vmemmap_populate_range(start, end, node, pte_page(*pte)); + } + + size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page)); + for (addr = start; addr < end; addr += size) { + unsigned long next = addr, last = addr + size; + + /* Populate the head page vmemmap page */ + pte = vmemmap_populate_address(addr, node, NULL, NULL); + if (!pte) + return -ENOMEM; + + /* Populate the tail pages vmemmap page */ + next = addr + PAGE_SIZE; + pte = vmemmap_populate_address(next, node, NULL, NULL); + if (!pte) + return -ENOMEM; + + /* + * Reuse the previous page for the rest of tail pages + * See layout diagram in Documentation/vm/vmemmap_dedup.rst + */ + next += PAGE_SIZE; + rc = vmemmap_populate_range(next, last, node, pte_page(*pte)); + if (rc) + return -ENOMEM; + } + + return 0; +} + struct page * __meminit __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap, struct dev_pagemap *pgmap) { unsigned long start = (unsigned long) pfn_to_page(pfn); unsigned long end = start + nr_pages * sizeof(struct page); + int r; if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) || !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION))) return NULL; - if (vmemmap_populate(start, end, nid, altmap)) + if (is_power_of_2(sizeof(struct page)) && + pgmap && pgmap_vmemmap_nr(pgmap) > 1 && !altmap) + r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap); + else + r = vmemmap_populate(start, end, nid, altmap); + + if (r < 0) return NULL; return pfn_to_page(pfn); From patchwork Wed Feb 23 19:48:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Joao Martins X-Patchwork-Id: 12757473
From: Joao Martins To: linux-mm@kvack.org Cc: Dan Williams , Vishal Verma , Matthew Wilcox , Jason Gunthorpe , Jane Chu , Muchun Song , Mike Kravetz , Andrew Morton , Jonathan Corbet , Christoph Hellwig , nvdimm@lists.linux.dev, linux-doc@vger.kernel.org, Joao Martins Subject: [PATCH v6 5/5] mm/page_alloc: reuse tail struct pages for compound devmaps Date: Wed, 23 Feb 2022 19:48:07 +0000 Message-Id: <20220223194807.12070-6-joao.m.martins@oracle.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220223194807.12070-1-joao.m.martins@oracle.com> References: <20220223194807.12070-1-joao.m.martins@oracle.com> Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev MIME-Version: 1.0
Currently memmap_init_zone_device() ends up initializing 32768 pages when it only needs to initialize 128 given tail page reuse. That number is worse with 1GB compound pages, 262144 instead of 128. Update memmap_init_zone_device() to skip redundant initialization, detailed below. When a pgmap @vmemmap_shift is set, all pages are mapped at a given huge page alignment and use compound pages to describe them, as opposed to one struct page per 4K page. With @vmemmap_shift > 0 and when struct pages are stored in RAM (!altmap), most tail pages are reused. Consequently, the amount of unique struct pages is a lot smaller than the total amount of struct pages being mapped. The altmap path is left alone since it does not support memory savings based on compound devmaps. Signed-off-by: Joao Martins --- mm/page_alloc.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e0c1e6bb09dd..01f10b5a4e47 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -6653,6 +6653,20 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn, } } +/* + * With compound page geometry and when struct pages are stored in ram most + * tail pages are reused. Consequently, the amount of unique struct pages to + * initialize is a lot smaller that the total amount of struct pages being + * mapped. This is a paired / mild layering violation with explicit knowledge + * of how the sparse_vmemmap internals handle compound pages in the lack + * of an altmap. See vmemmap_populate_compound_pages(). + */ +static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap, + unsigned long nr_pages) +{ + return !altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : nr_pages; +} + static void __ref memmap_init_compound(struct page *head, unsigned long head_pfn, unsigned long zone_idx, int nid, @@ -6717,7 +6731,7 @@ void __ref memmap_init_zone_device(struct zone *zone, continue; memmap_init_compound(page, pfn, zone_idx, nid, pgmap, - pfns_per_compound); + compound_nr_pages(altmap, pfns_per_compound)); } pr_info("%s initialised %lu pages in %ums\n", __func__,
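To make the effect of compound_nr_pages() above concrete, here is a stand-alone userspace sketch that mirrors its arithmetic; the 4K page size and 64-byte sizeof(struct page) are assumptions matching the x86_64 defaults used elsewhere in this series, not values defined by this hunk.

/* Userspace model of compound_nr_pages(): without an altmap only the head
 * vmemmap page and one tail vmemmap page hold unique struct pages, so
 * memmap_init_compound() only needs to initialize 2 * (PAGE_SIZE / 64) of them.
 */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE		4096UL
#define MODEL_STRUCT_PAGE_SZ	64UL	/* assumed sizeof(struct page) */

static unsigned long model_compound_nr_pages(bool has_altmap, unsigned long nr_pages)
{
	return has_altmap ? nr_pages : 2 * (MODEL_PAGE_SIZE / MODEL_STRUCT_PAGE_SZ);
}

int main(void)
{
	/* 2M compound page: 512 struct pages mapped, only 128 initialized */
	printf("2M: %lu -> %lu\n", 512UL, model_compound_nr_pages(false, 512UL));
	/* 1G compound page: 262144 struct pages mapped, only 128 initialized */
	printf("1G: %lu -> %lu\n", 262144UL, model_compound_nr_pages(false, 262144UL));
	return 0;
}

With those assumptions the result is 2 * (4096 / 64) = 128 struct pages initialized per compound page regardless of 2M or 1G geometry, which is where the 128 figure in the commit message comes from.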