From patchwork Tue Aug 11 02:24:24 2020
X-Patchwork-Submitter: Abel Wu
X-Patchwork-Id: 11708467
From: Abel Wu <wuyun.wu@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton
Cc: Abel Wu, open list:SLAB ALLOCATOR, open list
Subject: [PATCH] mm/slub: fix missing ALLOC_SLOWPATH stat when bulk alloc
Date: Tue, 11 Aug 2020 10:24:24 +0800
Message-ID: <20200811022427.1363-1-wuyun.wu@huawei.com>
From: Abel Wu <wuyun.wu@huawei.com>

The ALLOC_SLOWPATH statistic is currently missed for bulk allocations:
kmem_cache_alloc_bulk() enters the slow path through ___slab_alloc()
directly and therefore bypasses the accounting done in slab_alloc_node().
Fix it by recording the statistic in the allocation slow path itself, so
that every caller is counted.

Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
Reviewed-by: Pekka Enberg
Acked-by: David Rientjes
---
 mm/slub.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index df93a5a0e9a4..5d89e4064f83 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2600,6 +2600,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	void *freelist;
 	struct page *page;
 
+	stat(s, ALLOC_SLOWPATH);
+
 	page = c->page;
 	if (!page) {
 		/*
@@ -2788,7 +2790,6 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
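For anyone following the accounting change outside the kernel tree, below is
a self-contained user-space toy model of the two paths into the slow path.
This is not the mm/slub.c code: the one-slot freelist, helper bodies and the
main() driver are invented for illustration, and everything except the
placement of the stat() call is simplified away. It shows why counting
ALLOC_SLOWPATH only in the single-object wrapper misses bulk allocations,
and why moving the count into ___slab_alloc() covers both callers.

/*
 * Toy user-space model of the SLUB stat placement, not kernel code.
 * Names loosely mirror mm/slub.c; locking, per-cpu state and the
 * cmpxchg fastpath are all omitted.
 *
 * Build and run: cc -o statdemo statdemo.c && ./statdemo
 */
#include <stdio.h>
#include <stdlib.h>

enum stat_item { ALLOC_FASTPATH, ALLOC_SLOWPATH, NR_STATS };

struct kmem_cache_model {
	void *freelist;				/* toy one-slot freelist */
	unsigned long stats[NR_STATS];
};

static void stat(struct kmem_cache_model *s, enum stat_item item)
{
	s->stats[item]++;
}

/* Slow path: with the patch applied, the stat lives here, so every
 * caller that refills through this function is counted. */
static void *___slab_alloc(struct kmem_cache_model *s)
{
	stat(s, ALLOC_SLOWPATH);
	return malloc(32);			/* stand-in for a refill */
}

/* Single-object path, modeled after slab_alloc_node(). */
static void *slab_alloc(struct kmem_cache_model *s)
{
	void *object = s->freelist;

	if (!object) {
		object = ___slab_alloc(s);
		/* pre-patch: stat(s, ALLOC_SLOWPATH) sat here instead */
	} else {
		s->freelist = NULL;
		stat(s, ALLOC_FASTPATH);
	}
	return object;
}

/* Bulk path, modeled after kmem_cache_alloc_bulk(): it enters
 * ___slab_alloc() directly, never passing through slab_alloc(). */
static int alloc_bulk(struct kmem_cache_model *s, size_t size, void **p)
{
	for (size_t i = 0; i < size; i++) {
		p[i] = ___slab_alloc(s);
		if (!p[i])
			return 0;
	}
	return 1;
}

int main(void)
{
	struct kmem_cache_model s = { 0 };
	void *objs[4];

	slab_alloc(&s);			/* 1 slow-path hit */
	alloc_bulk(&s, 4, objs);	/* 4 more, missed pre-patch */

	printf("ALLOC_SLOWPATH = %lu\n", s.stats[ALLOC_SLOWPATH]);
	return 0;
}

With the stat() call inside ___slab_alloc() the model prints
ALLOC_SLOWPATH = 5; with the pre-patch placement in the single-object
wrapper it would print 1, silently dropping the four bulk refills.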