From patchwork Fri Jan 31 15:00:51 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kirill Tkhai <ktkhai@virtuozzo.com>
X-Patchwork-Id: 11360007
Subject: [PATCH v2] mm: Allocate shrinker_map on appropriate NUMA node
To: David Hildenbrand <david@redhat.com>, akpm@linux-foundation.org,
    mhocko@kernel.org, hannes@cmpxchg.org, shakeelb@google.com,
    vdavydov.dev@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <158047248934.390127.5043060848569612747.stgit@localhost.localdomain>
From: Kirill Tkhai <ktkhai@virtuozzo.com>
Message-ID: <5f3fc9a9-9a22-ccc3-5971-9783b60807bc@virtuozzo.com>
Date: Fri, 31 Jan 2020 18:00:51 +0300
mm: Allocate shrinker_map on appropriate NUMA node

From: Kirill Tkhai <ktkhai@virtuozzo.com>

Despite the fact that shrinker_map may be touched from any CPU (e.g., a bit
there may be set by a task running anywhere), kswapd is always bound to a
specific node. So allocate shrinker_map from the relevant NUMA node to
respect its NUMA locality. This also follows the generic way we allocate
memcg's per-node data. The node_state() pattern in both hunks is borrowed
from alloc_mem_cgroup_per_node_info().

v2: Use NUMA_NO_NODE instead of -1

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/memcontrol.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6f6dc8712e39..20700ad25373 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -323,7 +323,7 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 					 int size, int old_size)
 {
 	struct memcg_shrinker_map *new, *old;
-	int nid;
+	int nid, tmp;
 
 	lockdep_assert_held(&memcg_shrinker_map_mutex);
 
@@ -333,8 +333,9 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 		/* Not yet online memcg */
 		if (!old)
 			return 0;
-
-		new = kvmalloc(sizeof(*new) + size, GFP_KERNEL);
+		/* See comment in alloc_mem_cgroup_per_node_info() */
+		tmp = node_state(nid, N_NORMAL_MEMORY) ? nid : NUMA_NO_NODE;
+		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, tmp);
 		if (!new)
 			return -ENOMEM;
 
@@ -370,7 +371,7 @@ static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
 static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 {
 	struct memcg_shrinker_map *map;
-	int nid, size, ret = 0;
+	int nid, size, tmp, ret = 0;
 
 	if (mem_cgroup_is_root(memcg))
 		return 0;
@@ -378,7 +379,9 @@ static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 	mutex_lock(&memcg_shrinker_map_mutex);
 	size = memcg_shrinker_map_size;
 	for_each_node(nid) {
-		map = kvzalloc(sizeof(*map) + size, GFP_KERNEL);
+		/* See comment in alloc_mem_cgroup_per_node_info() */
+		tmp = node_state(nid, N_NORMAL_MEMORY) ? nid : NUMA_NO_NODE;
+		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, tmp);
 		if (!map) {
 			memcg_free_shrinker_maps(memcg);
 			ret = -ENOMEM;
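
For reference, both hunks open-code the same node-selection idiom. Below is a
minimal standalone sketch of that idiom; the helper names
pick_shrinker_map_node() and alloc_map_on_node() are hypothetical and not part
of the patch, which simply inlines the expression next to each allocation,
mirroring alloc_mem_cgroup_per_node_info():

	/*
	 * Illustrative sketch only, not part of the patch. If the target node
	 * has normal memory, allocate there so that node's kswapd gets a
	 * node-local shrinker_map; otherwise fall back to NUMA_NO_NODE and
	 * let the allocator pick any suitable node.
	 */
	#include <linux/gfp.h>		/* GFP_KERNEL */
	#include <linux/mm.h>		/* kvzalloc_node() */
	#include <linux/nodemask.h>	/* node_state(), N_NORMAL_MEMORY */
	#include <linux/numa.h>		/* NUMA_NO_NODE */

	static int pick_shrinker_map_node(int nid)	/* hypothetical helper */
	{
		return node_state(nid, N_NORMAL_MEMORY) ? nid : NUMA_NO_NODE;
	}

	static void *alloc_map_on_node(size_t bytes, int nid)	/* hypothetical helper */
	{
		return kvzalloc_node(bytes, GFP_KERNEL, pick_shrinker_map_node(nid));
	}

With this pattern, kswapd of node nid walks a bitmap that lives in that node's
own memory whenever the node has N_NORMAL_MEMORY pages.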