From patchwork Thu Sep 21 07:44:14 2023
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13393710
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, Zi Yan, Mike Kravetz, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v2 3/6] mm: memory: use a folio in do_numa_page()
Date: Thu, 21 Sep 2023 15:44:14 +0800
Message-ID: <20230921074417.24004-4-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230921074417.24004-1-wangkefeng.wang@huawei.com>
References: <20230921074417.24004-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0

NUMA balancing only tries to migrate non-compound pages in do_numa_page(),
so use a folio there to save several compound_head() calls. Note that we
use folio_estimated_sharers(): checking the estimated sharers is enough
because only normal pages are handled here; if large folio NUMA balancing
is supported later, a precise sharers check should be used instead. No
functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index dbc7b67eca68..a05cfb6be36d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4747,8 +4747,8 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = NULL;
-	int page_nid = NUMA_NO_NODE;
+	struct folio *folio = NULL;
+	int nid = NUMA_NO_NODE;
 	bool writable = false;
 	int last_cpupid;
 	int target_nid;
@@ -4779,12 +4779,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	    can_change_pte_writable(vma, vmf->address, pte))
 		writable = true;
 
-	page = vm_normal_page(vma, vmf->address, pte);
-	if (!page || is_zone_device_page(page))
+	folio = vm_normal_folio(vma, vmf->address, pte);
+	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
 	/* TODO: handle PTE-mapped THP */
-	if (PageCompound(page))
+	if (folio_test_large(folio))
 		goto out_map;
 
 	/*
@@ -4799,34 +4799,34 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	/*
-	 * Flag if the page is shared between multiple address spaces. This
+	 * Flag if the folio is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+	if (folio_estimated_sharers(folio) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	page_nid = page_to_nid(page);
+	nid = folio_nid(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
 	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
-	    !node_is_toptier(page_nid))
+	    !node_is_toptier(nid))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
-		last_cpupid = page_cpupid_last(page);
-	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
-			&flags);
+		last_cpupid = page_cpupid_last(&folio->page);
+	target_nid = numa_migrate_prep(&folio->page, vma, vmf->address, nid,
+			&flags);
 	if (target_nid == NUMA_NO_NODE) {
-		put_page(page);
+		folio_put(folio);
 		goto out_map;
 	}
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	writable = false;
 
 	/* Migrate to the requested node */
-	if (migrate_misplaced_folio(page_folio(page), vma, target_nid)) {
-		page_nid = target_nid;
+	if (migrate_misplaced_folio(folio, vma, target_nid)) {
+		nid = target_nid;
 		flags |= TNF_MIGRATED;
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
@@ -4842,8 +4842,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (page_nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, page_nid, 1, flags);
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, 1, flags);
 	return 0;
 out_map:
 	/*
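
As a sketch of why folio_estimated_sharers() is a safe stand-in for
page_mapcount() in this path: the helper is assumed here to look roughly
like the mm/internal.h definition of that era (an assumption; check the
actual tree), i.e. it only inspects the mapcount of the folio's first page:

/*
 * Sketch of the assumed helper: estimate how many address spaces share a
 * folio by reading the mapcount of its first (head) page only.
 */
static inline int folio_estimated_sharers(struct folio *folio)
{
	return page_mapcount(folio_page(folio, 0));
}

Because do_numa_page() bails out on large folios via folio_test_large()
earlier in the function, every folio reaching the TNF_SHARED check is
order-0, so folio_page(folio, 0) is the faulting page itself and the
"estimate" equals the exact page_mapcount(page) value the old code used,
which is why no functional change is intended.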