From patchwork Wed Jun 5 13:33:00 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13686922
X-Patchwork-Delegate: kuba@kernel.org
From: Yunsheng Lin
To: , ,
CC: , , Yunsheng Lin , Alexander Duyck , Andrew Morton ,
Subject: [PATCH net-next v6 10/15] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
Date: Wed, 5 Jun 2024 21:33:00 +0800
Message-ID: <20240605133306.11272-11-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240605133306.11272-1-linyunsheng@huawei.com>
References: <20240605133306.11272-1-linyunsheng@huawei.com>

There are more new APIs calling __page_frag_cache_refill() in this
patchset, which may stop the compiler from inlining
__page_frag_cache_refill() into __page_frag_alloc_va_align(). Losing
that inlining seems to cause some noticeable performance degradation
on an arm64 system with 64K PAGE_SIZE after the new callers of
__page_frag_cache_refill() are added: there is about a 24-byte binary
size increase for __page_frag_cache_refill() on that system.
By disassembling with gdb, it seems we can get a decrease of more than
100 bytes in binary size by using __alloc_pages() to replace
alloc_pages_node(), as alloc_pages_node() does some unnecessary
checking for nid being NUMA_NO_NODE, which can be avoided, especially
while page_frag is still part of the mm system.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin
---
 mm/page_frag_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index b5ad6e9d316d..525b577b03a9 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -65,11 +65,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
 		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
-	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
-				PAGE_FRAG_CACHE_MAX_ORDER);
+	page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+			     numa_mem_id(), NULL);
 #endif
 	if (unlikely(!page)) {
-		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+		page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
 		if (unlikely(!page)) {
 			memset(nc, 0, sizeof(*nc));
 			return NULL;
 		}
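
For reference, a minimal sketch of the indirection this patch bypasses,
paraphrased from include/linux/gfp.h (not the exact upstream text;
details vary by kernel version). alloc_pages_node() is an inline
wrapper that tests nid for NUMA_NO_NODE before falling through to
__alloc_pages():

/* Paraphrased from include/linux/gfp.h; not the exact upstream text. */
static inline struct page *__alloc_pages_node(int nid, gfp_t gfp_mask,
					      unsigned int order)
{
	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
	warn_if_node_offline(nid, gfp_mask);

	return __alloc_pages(gfp_mask, order, nid, NULL);
}

static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
					    unsigned int order)
{
	/*
	 * The branch every caller compiles in, even when nid is
	 * known at the call site to always be NUMA_NO_NODE, as it
	 * is in __page_frag_cache_refill().
	 */
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	return __alloc_pages_node(nid, gfp_mask, order);
}

Since the wrapper rewrites NUMA_NO_NODE to numa_mem_id() anyway,
passing numa_mem_id() directly to __alloc_pages() keeps the behavior
the same while letting the generated code drop the NUMA_NO_NODE check
and skip the node validation in __alloc_pages_node().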