From patchwork Fri Apr 3 17:30:51 2020
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11473209
From: "Uladzislau Rezki (Sony)"
To: LKML, "Paul E. McKenney", Joel Fernandes
Cc: RCU, linux-mm@kvack.org, Andrew Morton, Uladzislau Rezki, Steven Rostedt, Oleksiy Avramchenko
Subject: [PATCH 1/1] rcu/tree: add emergency pool for headless case
Date: Fri, 3 Apr 2020 19:30:51 +0200
Message-Id: <20200403173051.4081-1-urezki@gmail.com>

Maintain an emergency pool for each CPU with some extra objects.
There is a read-only sysfs attribute, "rcu_nr_emergency_objs",
that reflects the size of the pool; for now the default value is 3.
The pool is populated when a low-memory condition is detected.

Please note this is only for the headless case: the pool is used
when the regular SLAB allocator is not able to serve the request
for a dynamic rcu_head.

Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/tree.c | 133 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 97 insertions(+), 36 deletions(-)
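Note for reviewers: below is a minimal, self-contained userspace sketch of
the intended per-CPU emergency-pool flow (hypothetical names, a pthread
mutex standing in for krcp->lock). It only models the cache-on-reclaim /
pop-on-allocation-failure behaviour and is not part of the patch:

/* Standalone model of the emergency-pool flow; not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NR_EMERGENCY_OBJS 3	/* mirrors the rcu_nr_emergency_objs default */

struct emergency_pool {
	pthread_mutex_t lock;		/* stands in for krcp->lock */
	void *slot[NR_EMERGENCY_OBJS];	/* spare rcu_head-sized blocks */
	int nr;				/* number of cached objects */
};

/* Reclaim path: instead of freeing a dynamic wrapper, cache it. */
static int pool_stash(struct emergency_pool *p, void *obj)
{
	int cached = 0;

	pthread_mutex_lock(&p->lock);
	if (p->nr < NR_EMERGENCY_OBJS) {
		p->slot[p->nr++] = obj;
		cached = 1;	/* caller must not free obj */
	}
	pthread_mutex_unlock(&p->lock);
	return cached;
}

/* Allocation path: used only after the regular allocator has failed. */
static void *pool_grab(struct emergency_pool *p)
{
	void *obj = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->nr) {
		obj = p->slot[--p->nr];
		p->slot[p->nr] = NULL;	/* clear the pulled slot */
	}
	pthread_mutex_unlock(&p->lock);
	return obj;
}

int main(void)
{
	struct emergency_pool pool = { .lock = PTHREAD_MUTEX_INITIALIZER };
	void *blk = malloc(64);

	/* The "free" path caches the block instead of releasing it ... */
	if (!pool_stash(&pool, blk))
		free(blk);

	/* ... so a later failed allocation can fall back to the cache. */
	void *fallback = pool_grab(&pool);
	printf("fallback block: %p (nr left: %d)\n", fallback, pool.nr);
	free(fallback);
	return 0;
}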
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5e26145e9ead..f9f1f935ab0b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -114,6 +114,14 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
 int rcu_kfree_nowarn;
 module_param(rcu_kfree_nowarn, int, 0444);
 
+/*
+ * For headless variant. Under memory pressure an
+ * emergency pool can be used if the regular SLAB
+ * is not able to serve some memory for us.
+ */
+int rcu_nr_emergency_objs = 3;
+module_param(rcu_nr_emergency_objs, int, 0444);
+
 /* Number of rcu_nodes at specified level. */
 int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
 int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */
@@ -2877,6 +2885,12 @@ struct kfree_rcu_cpu {
 	bool initialized;
 	// Number of objects for which GP not started
 	int count;
+
+	/*
+	 * Reserved emergency pool for headless variant.
+	 */
+	int nr_emergency;
+	void **emergency;
 };
 
 static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc);
@@ -2892,6 +2906,27 @@ debug_rcu_bhead_unqueue(struct kvfree_rcu_bulk_data *bhead)
 #endif
 }
 
+static inline struct kfree_rcu_cpu *
+krc_this_cpu_lock(unsigned long *flags)
+{
+	struct kfree_rcu_cpu *krcp;
+
+	local_irq_save(*flags);	// For safely calling this_cpu_ptr().
+	krcp = this_cpu_ptr(&krc);
+	if (likely(krcp->initialized))
+		spin_lock(&krcp->lock);
+
+	return krcp;
+}
+
+static inline void
+krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags)
+{
+	if (likely(krcp->initialized))
+		spin_unlock(&krcp->lock);
+	local_irq_restore(flags);
+}
+
 /*
  * This function is invoked in workqueue context after a grace period.
  * It frees all the objects queued on ->bhead_free or ->head_free.
@@ -2974,6 +3009,7 @@ static void kfree_rcu_work(struct work_struct *work)
 	 */
 	for (; head; head = next) {
 		unsigned long offset = (unsigned long)head->func;
+		unsigned long flags;
 		bool headless;
 		void *ptr;
 
@@ -2991,10 +3027,23 @@ static void kfree_rcu_work(struct work_struct *work)
 		trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
 
 		if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset))) {
-			if (headless)
+			if (headless) {
 				kvfree((void *) *((unsigned long *) ptr));
 
-			kvfree(ptr);
+				krcp = krc_this_cpu_lock(&flags);
+				if (krcp->emergency) {
+					if (krcp->nr_emergency < rcu_nr_emergency_objs) {
+						krcp->emergency[krcp->nr_emergency++] = ptr;
+
+						/* Bypass freeing of it, it is in emergency pool. */
+						ptr = NULL;
+					}
+				}
+				krc_this_cpu_unlock(krcp, flags);
+			}
+
+			if (ptr)
+				kvfree(ptr);
 		}
 
 		rcu_lock_release(&rcu_callback_map);
@@ -3144,40 +3193,26 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
 	return true;
 }
 
-static inline struct rcu_head *attach_rcu_head_to_object(void *obj)
+static inline struct rcu_head *
+set_ptr_in_rcu_head_obj(void *ptr, unsigned long *rho)
+{
+	rho[0] = (unsigned long) ptr;
+	return ((struct rcu_head *) ++rho);
+}
+
+static inline struct rcu_head *
+alloc_rcu_head_obj(void *ptr)
 {
-	unsigned long *ptr;
+	unsigned long *rho;
 
 	/* Try hard to get the memory. */
-	ptr = kmalloc(sizeof(unsigned long *) +
+	rho = kmalloc(sizeof(unsigned long *) +
 			sizeof(struct rcu_head), GFP_KERNEL | __GFP_ATOMIC |
 				__GFP_HIGH | __GFP_RETRY_MAYFAIL);
-	if (!ptr)
+	if (!rho)
 		return NULL;
 
-	ptr[0] = (unsigned long) obj;
-	return ((struct rcu_head *) ++ptr);
-}
-
-static inline struct kfree_rcu_cpu *
-krc_this_cpu_lock(unsigned long *flags)
-{
-	struct kfree_rcu_cpu *krcp;
-
-	local_irq_save(*flags);	// For safely calling this_cpu_ptr().
-	krcp = this_cpu_ptr(&krc);
-	if (likely(krcp->initialized))
-		spin_lock(&krcp->lock);
-
-	return krcp;
-}
-
-static inline void
-krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags)
-{
-	if (likely(krcp->initialized))
-		spin_unlock(&krcp->lock);
-	local_irq_restore(flags);
+	return set_ptr_in_rcu_head_obj(ptr, rho);
 }
 
 /*
@@ -3237,15 +3272,31 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	if (!success) {
 		/* Is headless object? */
 		if (head == NULL) {
-			/* Drop the lock. */
+			/*
+			 * Drop the lock to use more permissive
+			 * parameters, after that take it back.
+			 */
 			krc_this_cpu_unlock(krcp, flags);
+			head = alloc_rcu_head_obj(ptr);
+			krcp = krc_this_cpu_lock(&flags);
 
-			head = attach_rcu_head_to_object(ptr);
-			if (head == NULL)
-				goto inline_return;
+			/*
+			 * Use emergency pool if still fails.
+			 */
+			if (head == NULL) {
+				if (!krcp->nr_emergency)
+					goto unlock_return;
 
-			/* Take it back. */
-			krcp = krc_this_cpu_lock(&flags);
+				head = set_ptr_in_rcu_head_obj(ptr,
+					krcp->emergency[--krcp->nr_emergency]);
+
+				/*
+				 * We do not need to do it. But just in case
+				 * let's set the pulled slot to NULL to avoid
+				 * magic issues.
+				 */
+				krcp->emergency[krcp->nr_emergency] = NULL;
+			}
 
 			/*
 			 * Tag the headless object. Such objects have a back-pointer
@@ -3282,7 +3333,6 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 unlock_return:
 	krc_this_cpu_unlock(krcp, flags);
 
-inline_return:
 	/*
 	 * High memory pressure, so inline kvfree() after
 	 * synchronize_rcu(). We can do it from might_sleep()
@@ -4272,6 +4322,17 @@ static void __init kfree_rcu_batch_init(void)
 		}
 
 		INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
+
+		/*
+		 * The pool will be populated when low memory condition
+		 * is detected. Therefore we do not fill it in here.
+		 */
+		krcp->emergency = kmalloc_array(rcu_nr_emergency_objs,
+			sizeof(void *), GFP_NOWAIT);
+
+		if (!krcp->emergency)
+			pr_err("Failed to create emergency pool for %d CPU!\n", cpu);
+
 		krcp->initialized = true;
 	}
 	if (register_shrinker(&kfree_rcu_shrinker))
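
For context, the path this pool backs is the headless one: an object with
no embedded rcu_head reaches kvfree_call_rcu() with head == NULL, so a
wrapper has to be allocated at call time. A rough caller sketch follows
(hypothetical struct, assuming the single-argument kvfree_rcu() form that
this series builds on), just to illustrate when the pool matters:

/* Hypothetical caller, sketched against this series' headless API. */
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	int a;
	char name[16];
	/* Deliberately no struct rcu_head member: the "headless" case. */
};

static void drop_foo(struct foo *fp)
{
	/*
	 * With no embedded rcu_head, kvfree_call_rcu() runs with
	 * head == NULL and must allocate a wrapper object dynamically.
	 * If that allocation fails under memory pressure, the per-CPU
	 * emergency pool added by this patch supplies the wrapper.
	 */
	kvfree_rcu(fp);
}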