From patchwork Mon Sep 25 23:48:28 2023
X-Patchwork-Submitter: Mike Kravetz <mike.kravetz@oracle.com>
X-Patchwork-Id: 13398574
From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
    Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
    Barry Song <21cnbao@gmail.com>, Michal Hocko, Matthew Wilcox,
    Xiongchun Duan, Andrew Morton, Mike Kravetz
Subject: [PATCH v6 0/8] Batch hugetlb vmemmap modification operations
Date: Mon, 25 Sep 2023 16:48:28 -0700
Message-ID: <20230925234837.86786-1-mike.kravetz@oracle.com>
X-Mailer: git-send-email 2.41.0
When hugetlb vmemmap optimization was introduced, the overhead of enabling
the option was measured as described in commit 426e5c429d16 [1].  The
summary states that allocating a hugetlb page should be ~2x slower with
optimization and freeing a hugetlb page should be ~2-3x slower.  Such
overhead was deemed an acceptable trade-off for the memory savings
obtained by freeing vmemmap pages.

It was recently reported that the overhead associated with enabling
vmemmap optimization could be as high as 190x for hugetlb page
allocations.  Yes, 190x!
Some actual numbers from other environments are:

Bare Metal 8 socket Intel(R) Xeon(R) CPU E7-8895
------------------------------------------------
Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 0
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m4.119s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m4.477s

Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 1
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m28.973s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m36.748s

VM with 252 vcpus on host with 2 socket AMD EPYC 7J13 Milan
-----------------------------------------------------------
Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 0
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m2.463s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m2.931s

Unmodified next-20230824, vm.hugetlb_optimize_vmemmap = 1
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    2m27.609s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    2m29.924s

In the VM environment, enabling hugetlb vmemmap optimization made
allocation 61x slower.  A quick profile showed that the vast majority of
this overhead was due to TLB flushing.  Each time we modify the kernel
page table we need to flush the TLB.  For each hugetlb page that is
optimized, up to two TLB flushes may be performed: one for the vmemmap
pages associated with the hugetlb page, and another if the vmemmap pages
are mapped at the PMD level and must be split.  The TLB flushes required
for the kernel page table result in a broadcast IPI, with each CPU having
to flush a range of pages or do a global flush if a threshold is
exceeded.  So the flush time increases with the number of CPUs.  In
addition, in virtual environments the broadcast IPI can't be accelerated
by hypervisor hardware; it leads to traps that need to wake up/IPI all
vCPUs, which is very expensive.
Because of this, the slowdown in virtual environments is even worse than
on bare metal as the number of vCPUs/CPUs increases.

The following series attempts to reduce the amount of time spent in TLB
flushing.  The idea is to batch the vmemmap modification operations for
multiple hugetlb pages.  Instead of doing one or two TLB flushes for each
page, we do two TLB flushes for each batch of pages: one flush after
splitting pages mapped at the PMD level, and another after remapping the
vmemmap associated with all hugetlb pages.  Results of such batching are
as follows:

Bare Metal 8 socket Intel(R) Xeon(R) CPU E7-8895
------------------------------------------------
next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 0
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m4.719s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m4.245s

next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 1
time echo 500000 > .../hugepages-2048kB/nr_hugepages
real    0m7.267s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m13.199s

VM with 252 vcpus on host with 2 socket AMD EPYC 7J13 Milan
-----------------------------------------------------------
next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 0
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m2.715s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m3.186s

next-20230824 + Batching patches, vm.hugetlb_optimize_vmemmap = 1
time echo 524288 > .../hugepages-2048kB/nr_hugepages
real    0m4.799s
time echo 0 > .../hugepages-2048kB/nr_hugepages
real    0m5.273s

With batching, results are back in the 2-3x slowdown range.

This series is based on mm-unstable (September 24).

Changes v5 -> v6:
- patch 4: in bulk_vmemmap_restore_error, remove folio from list before
  calling add_hugetlb_folio.
- Added Muchun RB for patches 2 and 3.

Changes v4 -> v5:
- patch 3: comment style updated; removed unnecessary INIT_LIST_HEAD.
- patch 4: updated hugetlb_vmemmap_restore_folios to pass back the number
  of restored folios in the non-error case.  In addition, the routine
  passes back the list of folios with vmemmap.  Naming is more consistent.
- patch 5: removed over-optimization and added Muchun RB.
- patch 6: break and early return in the ENOMEM case.  Updated comments.
  Added Muchun RB.
- patch 7: updated comments about splitting failure.  Added Muchun RB.
- patch 8: made comments consistent.

Changes v3 -> v4:
- Rebased on mm-unstable and dropped requisite patches.
- patch 2: updated to take bootmem vmemmap initialization into account.
- patch 3: more changes for bootmem hugetlb pages; added routine
  prep_and_add_bootmem_folios.
- patch 5: in hugetlb_vmemmap_optimize_folios, on ENOMEM check for
  list_empty before freeing and retrying.  This is more important in a
  subsequent patch where we flush_tlb_all after ENOMEM.

Changes v2 -> v3:
- patch 5 was part of an earlier series that was not picked up.  It is
  included here as it helps with batching optimizations.
- patch 6: hugetlb_vmemmap_restore_folios is changed from type void to
  returning an error code, as well as an additional output parameter
  providing the number of folios for which vmemmap was actually restored.
  The caller can then be more intelligent about processing the list.
- patch 9: eliminate local list in vmemmap_restore_pte.  The routine
  hugetlb_vmemmap_optimize_folios checks for ENOMEM and frees accumulated
  vmemmap pages while processing the list.
- patch 10: introduce flags field to struct vmemmap_remap_walk and
  VMEMMAP_SPLIT_NO_TLB_FLUSH for not flushing during the pass to split
  PMDs.
- patch 11: rename flag VMEMMAP_REMAP_NO_TLB_FLUSH and pass it in from
  callers.

Changes v1 -> v2:
- patch 5 now takes into account the requirement that only compound pages
  with the hugetlb flag set can be passed to vmemmap routines.  This
  involved separating the 'prep' of hugetlb pages even further.
  The code dealing with bootmem allocations was also modified so that
  batching is possible.  Adding a 'batch' of hugetlb pages to their
  respective free lists is now done in one lock cycle.
- patch 7: added description of routine hugetlb_vmemmap_restore_folios
  (Muchun).
- patch 8: rename bulk_pages to vmemmap_pages and let caller be
  responsible for freeing (Muchun).
- patch 9: use 'walk->remap_pte' to determine if a split-only operation
  is being performed (Muchun).  Removed unused variable and
  hugetlb_optimize_vmemmap_key (Muchun).
- patch 10: pass 'flags' variable instead of bool to indicate behavior
  and allow for future expansion (Muchun).  Single flag
  VMEMMAP_NO_TLB_FLUSH.  Provide detailed comment about the need to keep
  old and new vmemmap pages in sync (Muchun).
- patch 11: pass flag variable as in patch 10 (Muchun).

Joao Martins (2):
  hugetlb: batch PMD split for bulk vmemmap dedup
  hugetlb: batch TLB flushes when freeing vmemmap

Mike Kravetz (6):
  hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
  hugetlb: restructure pool allocations
  hugetlb: perform vmemmap optimization on a list of pages
  hugetlb: perform vmemmap restoration on a list of pages
  hugetlb: batch freeing of vmemmap pages
  hugetlb: batch TLB flushes when restoring vmemmap

 mm/hugetlb.c         | 301 ++++++++++++++++++++++++++++++++++++-------
 mm/hugetlb_vmemmap.c | 273 +++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.h |  15 +++
 3 files changed, 506 insertions(+), 83 deletions(-)