From patchwork Fri Aug 19 20:48:44 2022
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 12949184
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Paul McKenney, Rushikesh S Kadam,
    "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker,
    Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 01/14] rcu: Introduce call_rcu_lazy() API implementation
Date: Fri, 19 Aug 2022 20:48:44 +0000
Message-Id: <20220819204857.3066329-2-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

Implement timer-based RCU lazy callback batching. The batch is flushed
whenever a certain amount of time has passed, or the batch on a
particular CPU grows too big. A future patch will also flush it under
memory pressure.

To handle several corner cases automagically (such as rcu_barrier() and
hotplug), we re-use bypass lists to handle lazy CBs. The bypass list
length has the lazy CB length included in it. A separate lazy CB length
counter is also introduced to keep track of the number of lazy CBs.

Suggested-by: Paul McKenney
Signed-off-by: Joel Fernandes (Google)
---
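[ Illustration for reviewers: a minimal hypothetical user of the new API.
  Only call_rcu_lazy() below comes from this patch; the struct and
  function names are made up for the example. ]

    struct foo {
        int data;
        struct rcu_head rh;
    };

    /* Invoked after a grace period, possibly a long time later. */
    static void free_foo(struct rcu_head *rh)
    {
        kfree(container_of(rh, struct foo, rh));
    }

    static void release_foo(struct foo *fp)
    {
        /*
         * Nothing waits on this callback and it only frees memory,
         * so batching its grace period lazily is safe.
         */
        call_rcu_lazy(&fp->rh, free_foo);
    }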
 include/linux/rcu_segcblist.h |   1 +
 include/linux/rcupdate.h      |   6 +
 kernel/rcu/Kconfig            |   8 ++
 kernel/rcu/rcu.h              |  11 ++
 kernel/rcu/rcu_segcblist.c    |  15 ++-
 kernel/rcu/rcu_segcblist.h    |  20 +++-
 kernel/rcu/tree.c             | 130 ++++++++++++++--------
 kernel/rcu/tree.h             |  10 +-
 kernel/rcu/tree_nocb.h        | 199 ++++++++++++++++++++++--------
 9 files changed, 301 insertions(+), 99 deletions(-)

diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
index 659d13a7ddaa..9a992707917b 100644
--- a/include/linux/rcu_segcblist.h
+++ b/include/linux/rcu_segcblist.h
@@ -22,6 +22,7 @@ struct rcu_cblist {
     struct rcu_head *head;
     struct rcu_head **tail;
     long len;
+    long lazy_len;
 };

 #define RCU_CBLIST_INITIALIZER(n) { .head = NULL, .tail = &n.head }
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 1a32036c918c..9191a3d88087 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -82,6 +82,12 @@ static inline int rcu_preempt_depth(void)

 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */

+#ifdef CONFIG_RCU_LAZY
+void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func);
+#else
+#define call_rcu_lazy(head, func) call_rcu(head, func)
+#endif
+
 /* Internal to kernel */
 void rcu_init(void);
 extern int rcu_scheduler_active;
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index 27aab870ae4c..779b6e84006b 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -293,4 +293,12 @@ config TASKS_TRACE_RCU_READ_MB
       Say N here if you hate read-side memory barriers.
       Take the default if you are unsure.

+config RCU_LAZY
+    bool "RCU callback lazy invocation functionality"
+    depends on RCU_NOCB_CPU
+    default n
+    help
+      To save power, batch RCU callbacks and flush after delay, memory
+      pressure or callback list growing too big.
+
 endmenu # "RCU Subsystem"
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 4916077119f3..608f6ab76c7f 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -463,6 +463,14 @@ enum rcutorture_type {
     INVALID_RCU_FLAVOR
 };

+#if defined(CONFIG_RCU_LAZY)
+unsigned long rcu_lazy_get_jiffies_till_flush(void);
+void rcu_lazy_set_jiffies_till_flush(unsigned long j);
+#else
+static inline unsigned long rcu_lazy_get_jiffies_till_flush(void) { return 0; }
+static inline void rcu_lazy_set_jiffies_till_flush(unsigned long j) { }
+#endif
+
 #if defined(CONFIG_TREE_RCU)
 void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
                 unsigned long *gp_seq);
@@ -472,6 +480,8 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
                    unsigned long c_old,
                    unsigned long c);
 void rcu_gp_set_torture_wait(int duration);
+void rcu_force_call_rcu_to_lazy(bool force);
+
 #else
 static inline void rcutorture_get_gp_data(enum rcutorture_type test_type,
                       int *flags, unsigned long *gp_seq)
@@ -490,6 +500,7 @@ void do_trace_rcu_torture_read(const char *rcutorturename,
     do { } while (0)
 #endif
 static inline void rcu_gp_set_torture_wait(int duration) { }
+static inline void rcu_force_call_rcu_to_lazy(bool force) { }
 #endif

 #if IS_ENABLED(CONFIG_RCU_TORTURE_TEST) || IS_MODULE(CONFIG_RCU_TORTURE_TEST)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index c54ea2b6a36b..776647cd2d6c 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -20,16 +20,21 @@ void rcu_cblist_init(struct rcu_cblist *rclp)
     rclp->head = NULL;
     rclp->tail = &rclp->head;
     rclp->len = 0;
+    rclp->lazy_len = 0;
 }

 /*
  * Enqueue an rcu_head structure onto the specified callback list.
  */
-void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp)
+void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp,
+            bool lazy)
 {
     *rclp->tail = rhp;
     rclp->tail = &rhp->next;
     WRITE_ONCE(rclp->len, rclp->len + 1);
+
+    if (IS_ENABLED(CONFIG_RCU_LAZY) && lazy)
+        WRITE_ONCE(rclp->lazy_len, rclp->lazy_len + 1);
 }

 /*
@@ -38,11 +43,12 @@ void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp)
  * element of the second rcu_cblist structure, but ensuring that the second
  * rcu_cblist structure, if initially non-empty, always appears non-empty
  * throughout the process. If rdp is NULL, the second rcu_cblist structure
- * is instead initialized to empty.
+ * is instead initialized to empty. Also account for lazy_len for lazy CBs.
  */
 void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
                   struct rcu_cblist *srclp,
-                  struct rcu_head *rhp)
+                  struct rcu_head *rhp,
+                  bool lazy)
 {
     drclp->head = srclp->head;
     if (drclp->head)
@@ -58,6 +64,9 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
         srclp->tail = &rhp->next;
         WRITE_ONCE(srclp->len, 1);
     }
+
+    if (IS_ENABLED(CONFIG_RCU_LAZY) && rhp && lazy)
+        WRITE_ONCE(srclp->lazy_len, 1);
 }

 /*
diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
index 431cee212467..8e90b34adb00 100644
--- a/kernel/rcu/rcu_segcblist.h
+++ b/kernel/rcu/rcu_segcblist.h
@@ -15,14 +15,30 @@ static inline long rcu_cblist_n_cbs(struct rcu_cblist *rclp)
     return READ_ONCE(rclp->len);
 }

+/* Return number of lazy callbacks in the specified callback list. */
+static inline long rcu_cblist_n_lazy_cbs(struct rcu_cblist *rclp)
+{
+    if (IS_ENABLED(CONFIG_RCU_LAZY))
+        return READ_ONCE(rclp->lazy_len);
+    return 0;
+}
+
+static inline void rcu_cblist_reset_lazy_len(struct rcu_cblist *rclp)
+{
+    if (IS_ENABLED(CONFIG_RCU_LAZY))
+        WRITE_ONCE(rclp->lazy_len, 0);
+}
+
 /* Return number of callbacks in segmented callback list by summing seglen. */
 long rcu_segcblist_n_segment_cbs(struct rcu_segcblist *rsclp);

 void rcu_cblist_init(struct rcu_cblist *rclp);
-void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp);
+void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp,
+            bool lazy);
 void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
                   struct rcu_cblist *srclp,
-                  struct rcu_head *rhp);
+                  struct rcu_head *rhp,
+                  bool lazy);
 struct rcu_head *rcu_cblist_dequeue(struct rcu_cblist *rclp);

 /*
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c25ba442044a..e76fef8031be 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3058,47 +3058,8 @@ static void check_cb_ovld(struct rcu_data *rdp)
     raw_spin_unlock_rcu_node(rnp);
 }

-/**
- * call_rcu() - Queue an RCU callback for invocation after a grace period.
- * @head: structure to be used for queueing the RCU updates.
- * @func: actual callback function to be invoked after the grace period
- *
- * The callback function will be invoked some time after a full grace
- * period elapses, in other words after all pre-existing RCU read-side
- * critical sections have completed. However, the callback function
- * might well execute concurrently with RCU read-side critical sections
- * that started after call_rcu() was invoked.
- *
- * RCU read-side critical sections are delimited by rcu_read_lock()
- * and rcu_read_unlock(), and may be nested. In addition, but only in
- * v5.0 and later, regions of code across which interrupts, preemption,
- * or softirqs have been disabled also serve as RCU read-side critical
- * sections. This includes hardware interrupt handlers, softirq handlers,
- * and NMI handlers.
- *
- * Note that all CPUs must agree that the grace period extended beyond
- * all pre-existing RCU read-side critical section. On systems with more
- * than one CPU, this means that when "func()" is invoked, each CPU is
- * guaranteed to have executed a full memory barrier since the end of its
- * last RCU read-side critical section whose beginning preceded the call
- * to call_rcu(). It also means that each CPU executing an RCU read-side
- * critical section that continues beyond the start of "func()" must have
- * executed a memory barrier after the call_rcu() but before the beginning
- * of that RCU read-side critical section. Note that these guarantees
- * include CPUs that are offline, idle, or executing in user mode, as
- * well as CPUs that are executing in the kernel.
- *
- * Furthermore, if CPU A invoked call_rcu() and CPU B invoked the
- * resulting RCU callback function "func()", then both CPU A and CPU B are
- * guaranteed to execute a full memory barrier during the time interval
- * between the call to call_rcu() and the invocation of "func()" -- even
- * if CPU A and CPU B are the same CPU (but again only if the system has
- * more than one CPU).
- *
- * Implementation of these memory-ordering guarantees is described here:
- * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
- */
-void call_rcu(struct rcu_head *head, rcu_callback_t func)
+static void
+__call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy)
 {
     static atomic_t doublefrees;
     unsigned long flags;
@@ -3139,7 +3100,7 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
     }

     check_cb_ovld(rdp);
-    if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags))
+    if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
         return; // Enqueued onto ->nocb_bypass, so just leave.
     // If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
     rcu_segcblist_enqueue(&rdp->cblist, head);
@@ -3161,8 +3122,86 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
         local_irq_restore(flags);
     }
 }
-EXPORT_SYMBOL_GPL(call_rcu);

+#ifdef CONFIG_RCU_LAZY
+/**
+ * call_rcu_lazy() - Lazily queue RCU callback for invocation after grace period.
+ * @head: structure to be used for queueing the RCU updates.
+ * @func: actual callback function to be invoked after the grace period
+ *
+ * The callback function will be invoked some time after a full grace
+ * period elapses, in other words after all pre-existing RCU read-side
+ * critical sections have completed.
+ *
+ * Use this API instead of call_rcu() if you don't mind the callback being
+ * invoked after very long periods of time on systems without memory pressure
+ * and on systems which are lightly loaded or mostly idle.
+ *
+ * Other than the extra delay in callbacks being invoked, this function is
+ * identical to, and reuses call_rcu()'s logic. Refer to call_rcu() for more
+ * details about memory ordering and other functionality.
+ */
+void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func)
+{
+    return __call_rcu_common(head, func, true);
+}
+EXPORT_SYMBOL_GPL(call_rcu_lazy);
+#endif
+
+static bool force_call_rcu_to_lazy;
+
+void rcu_force_call_rcu_to_lazy(bool force)
+{
+    if (IS_ENABLED(CONFIG_RCU_SCALE_TEST))
+        WRITE_ONCE(force_call_rcu_to_lazy, force);
+}
+EXPORT_SYMBOL_GPL(rcu_force_call_rcu_to_lazy);
+
+/**
+ * call_rcu() - Queue an RCU callback for invocation after a grace period.
+ * @head: structure to be used for queueing the RCU updates.
+ * @func: actual callback function to be invoked after the grace period
+ *
+ * The callback function will be invoked some time after a full grace
+ * period elapses, in other words after all pre-existing RCU read-side
+ * critical sections have completed. However, the callback function
+ * might well execute concurrently with RCU read-side critical sections
+ * that started after call_rcu() was invoked.
+ *
+ * RCU read-side critical sections are delimited by rcu_read_lock()
+ * and rcu_read_unlock(), and may be nested. In addition, but only in
+ * v5.0 and later, regions of code across which interrupts, preemption,
+ * or softirqs have been disabled also serve as RCU read-side critical
+ * sections. This includes hardware interrupt handlers, softirq handlers,
+ * and NMI handlers.
+ *
+ * Note that all CPUs must agree that the grace period extended beyond
+ * all pre-existing RCU read-side critical section. On systems with more
+ * than one CPU, this means that when "func()" is invoked, each CPU is
+ * guaranteed to have executed a full memory barrier since the end of its
+ * last RCU read-side critical section whose beginning preceded the call
+ * to call_rcu(). It also means that each CPU executing an RCU read-side
+ * critical section that continues beyond the start of "func()" must have
+ * executed a memory barrier after the call_rcu() but before the beginning
+ * of that RCU read-side critical section. Note that these guarantees
+ * include CPUs that are offline, idle, or executing in user mode, as
+ * well as CPUs that are executing in the kernel.
+ *
+ * Furthermore, if CPU A invoked call_rcu() and CPU B invoked the
+ * resulting RCU callback function "func()", then both CPU A and CPU B are
+ * guaranteed to execute a full memory barrier during the time interval
+ * between the call to call_rcu() and the invocation of "func()" -- even
+ * if CPU A and CPU B are the same CPU (but again only if the system has
+ * more than one CPU).
+ *
+ * Implementation of these memory-ordering guarantees is described here:
+ * Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst.
+ */
+void call_rcu(struct rcu_head *head, rcu_callback_t func)
+{
+    return __call_rcu_common(head, func, force_call_rcu_to_lazy);
+}
+EXPORT_SYMBOL_GPL(call_rcu);

 /* Maximum number of jiffies to wait before draining a batch. */
 #define KFREE_DRAIN_JIFFIES (HZ / 50)
@@ -4056,7 +4095,8 @@ static void rcu_barrier_entrain(struct rcu_data *rdp)
     rdp->barrier_head.func = rcu_barrier_callback;
     debug_rcu_head_queue(&rdp->barrier_head);
     rcu_nocb_lock(rdp);
-    WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
+    WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false,
+                        /* wake gp thread */ true));
     if (rcu_segcblist_entrain(&rdp->cblist, &rdp->barrier_head)) {
         atomic_inc(&rcu_state.barrier_cpu_count);
     } else {
@@ -4476,7 +4516,7 @@ void rcutree_migrate_callbacks(int cpu)
     my_rdp = this_cpu_ptr(&rcu_data);
     my_rnp = my_rdp->mynode;
     rcu_nocb_lock(my_rdp); /* irqs already disabled. */
-    WARN_ON_ONCE(!rcu_nocb_flush_bypass(my_rdp, NULL, jiffies));
+    WARN_ON_ONCE(!rcu_nocb_flush_bypass(my_rdp, NULL, jiffies, false, false));
     raw_spin_lock_rcu_node(my_rnp); /* irqs already disabled. */
     /* Leverage recent GPs and set GP for new callbacks. */
     needwake = rcu_advance_cbs(my_rnp, rdp) ||
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 2ccf5845957d..7b1ddee6a159 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -267,8 +267,9 @@ struct rcu_data {
 /* Values for nocb_defer_wakeup field in struct rcu_data. */
 #define RCU_NOCB_WAKE_NOT    0
 #define RCU_NOCB_WAKE_BYPASS 1
-#define RCU_NOCB_WAKE        2
-#define RCU_NOCB_WAKE_FORCE  3
+#define RCU_NOCB_WAKE_LAZY   2
+#define RCU_NOCB_WAKE        3
+#define RCU_NOCB_WAKE_FORCE  4

 #define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500))
                     /* For jiffies_till_first_fqs and */
@@ -436,9 +437,10 @@ static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
 static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq);
 static void rcu_init_one_nocb(struct rcu_node *rnp);
 static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                  unsigned long j);
+                  unsigned long j, bool lazy, bool wakegp);
 static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                bool *was_alldone, unsigned long flags);
+                bool *was_alldone, unsigned long flags,
+                bool lazy);
 static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_empty,
                  unsigned long flags);
 static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp, int level);
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index e369efe94fda..55636da76bc2 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -256,6 +256,31 @@ static bool wake_nocb_gp(struct rcu_data *rdp, bool force)
     return __wake_nocb_gp(rdp_gp, rdp, force, flags);
 }

+/*
+ * LAZY_FLUSH_JIFFIES decides the maximum amount of time that
+ * can elapse before lazy callbacks are flushed. Lazy callbacks
+ * could be flushed much earlier for a number of other reasons
+ * however, LAZY_FLUSH_JIFFIES will ensure no lazy callbacks are
+ * left unsubmitted to RCU after those many jiffies.
+ */
+#define LAZY_FLUSH_JIFFIES (10 * HZ)
+unsigned long jiffies_till_flush = LAZY_FLUSH_JIFFIES;
+
+#ifdef CONFIG_RCU_LAZY
+// To be called only from test code.
+void rcu_lazy_set_jiffies_till_flush(unsigned long jif)
+{
+    jiffies_till_flush = jif;
+}
+EXPORT_SYMBOL(rcu_lazy_set_jiffies_till_flush);
+
+unsigned long rcu_lazy_get_jiffies_till_flush(void)
+{
+    return jiffies_till_flush;
+}
+EXPORT_SYMBOL(rcu_lazy_get_jiffies_till_flush);
+#endif
+
 /*
  * Arrange to wake the GP kthread for this NOCB group at some future
  * time when it is safe to do so.
@@ -265,6 +290,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
 {
     unsigned long flags;
     struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
+    unsigned long mod_jif = 0;

     raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags);

@@ -272,16 +298,32 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
      * Bypass wakeup overrides previous deferments. In case
      * of callback storm, no need to wake up too early.
      */
-    if (waketype == RCU_NOCB_WAKE_BYPASS) {
-        mod_timer(&rdp_gp->nocb_timer, jiffies + 2);
-        WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype);
-    } else {
+    switch (waketype) {
+    case RCU_NOCB_WAKE_LAZY:
+        if (rdp->nocb_defer_wakeup != RCU_NOCB_WAKE_LAZY)
+            mod_jif = jiffies_till_flush;
+        break;
+
+    case RCU_NOCB_WAKE_BYPASS:
+        mod_jif = 2;
+        break;
+
+    case RCU_NOCB_WAKE:
+    case RCU_NOCB_WAKE_FORCE:
+        // If the type of deferred wake is "stronger"
+        // than it was before, make it wake up the soonest.
         if (rdp_gp->nocb_defer_wakeup < RCU_NOCB_WAKE)
-            mod_timer(&rdp_gp->nocb_timer, jiffies + 1);
-        if (rdp_gp->nocb_defer_wakeup < waketype)
-            WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype);
+            mod_jif = 1;
+        break;
     }

+    if (mod_jif)
+        mod_timer(&rdp_gp->nocb_timer, jiffies + mod_jif);
+
+    // If new type of wake up is stronger than before, promote.
+    if (rdp_gp->nocb_defer_wakeup < waketype)
+        WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype);
+
     raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags);

     trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, reason);
@@ -296,7 +338,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
  * Note that this function always returns true if rhp is NULL.
  */
 static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                     unsigned long j)
+                     unsigned long j, bool lazy)
 {
     struct rcu_cblist rcl;

@@ -310,7 +352,9 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
     /* Note: ->cblist.len already accounts for ->nocb_bypass contents. */
     if (rhp)
         rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */
-    rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
+
+    /* The lazy CBs are being flushed, but a new one might be enqueued. */
+    rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp, lazy);
     rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
     WRITE_ONCE(rdp->nocb_bypass_first, j);
     rcu_nocb_bypass_unlock(rdp);
@@ -326,13 +370,20 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
  * Note that this function always returns true if rhp is NULL.
  */
 static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                  unsigned long j)
+                  unsigned long j, bool lazy, bool wake_gp)
 {
+    bool ret;
+
     if (!rcu_rdp_is_offloaded(rdp))
         return true;
     rcu_lockdep_assert_cblist_protected(rdp);
     rcu_nocb_bypass_lock(rdp);
-    return rcu_nocb_do_flush_bypass(rdp, rhp, j);
+    ret = rcu_nocb_do_flush_bypass(rdp, rhp, j, lazy);
+
+    if (wake_gp)
+        wake_nocb_gp(rdp, true);
+
+    return ret;
 }

 /*
@@ -345,7 +396,7 @@ static void rcu_nocb_try_flush_bypass(struct rcu_data *rdp, unsigned long j)
     if (!rcu_rdp_is_offloaded(rdp) ||
         !rcu_nocb_bypass_trylock(rdp))
         return;
-    WARN_ON_ONCE(!rcu_nocb_do_flush_bypass(rdp, NULL, j));
+    WARN_ON_ONCE(!rcu_nocb_do_flush_bypass(rdp, NULL, j, false));
 }

 /*
@@ -367,12 +418,14 @@ static void rcu_nocb_try_flush_bypass(struct rcu_data *rdp, unsigned long j)
  * there is only one CPU in operation.
  */
 static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                bool *was_alldone, unsigned long flags)
+                bool *was_alldone, unsigned long flags,
+                bool lazy)
 {
     unsigned long c;
     unsigned long cur_gp_seq;
     unsigned long j = jiffies;
     long ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+    long n_lazy_cbs = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);

     lockdep_assert_irqs_disabled();

@@ -414,30 +467,47 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
     }
     WRITE_ONCE(rdp->nocb_nobypass_count, c);

-    // If there hasn't yet been all that many ->cblist enqueues
-    // this jiffy, tell the caller to enqueue onto ->cblist. But flush
-    // ->nocb_bypass first.
-    if (rdp->nocb_nobypass_count < nocb_nobypass_lim_per_jiffy) {
+    // If caller passed a non-lazy CB and there hasn't yet been all that
+    // many ->cblist enqueues this jiffy, tell the caller to enqueue it
+    // onto ->cblist. But flush ->nocb_bypass first. Also do so, if total
+    // number of CBs (lazy + non-lazy) grows too much, or there were lazy
+    // CBs previously queued and the current one is non-lazy.
+    //
+    // Note that if the bypass list has lazy CBs, and the main list is
+    // empty, and rhp happens to be non-lazy, then we end up flushing all
+    // the lazy CBs to the main list as well. That's the right thing to do,
+    // since we are kick-starting RCU GP processing anyway for the non-lazy
+    // one, we can just reuse that GP for the already queued-up lazy ones.
+    if ((rdp->nocb_nobypass_count < nocb_nobypass_lim_per_jiffy && !lazy) ||
+        (!lazy && n_lazy_cbs) ||
+        (lazy && n_lazy_cbs >= qhimark)) {
         rcu_nocb_lock(rdp);
-        *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
+
+        // This variable helps decide if a wakeup of the rcuog thread
+        // is needed. It is passed to __call_rcu_nocb_wake() by the
+        // caller. If only lazy CBs were previously queued and this one
+        // is non-lazy, make sure the caller does a wake up.
+        *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist) ||
+                   (!lazy && n_lazy_cbs);
+
         if (*was_alldone)
             trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
-                        TPS("FirstQ"));
-        WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j));
+                        lazy ? TPS("FirstLazyQ") : TPS("FirstQ"));
+        WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j, lazy, false));
         WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass));
         return false; // Caller must enqueue the callback.
     }

     // If ->nocb_bypass has been used too long or is too full,
     // flush ->nocb_bypass to ->cblist.
-    if ((ncbs && j != READ_ONCE(rdp->nocb_bypass_first)) ||
-        ncbs >= qhimark) {
+    if ((ncbs && j != READ_ONCE(rdp->nocb_bypass_first)) || ncbs >= qhimark) {
         rcu_nocb_lock(rdp);
-        if (!rcu_nocb_flush_bypass(rdp, rhp, j)) {
-            *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
+        if (!rcu_nocb_flush_bypass(rdp, rhp, j, lazy, false)) {
+            *was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist) ||
+                       (!lazy && n_lazy_cbs);
             if (*was_alldone)
                 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
-                            TPS("FirstQ"));
+                            lazy ? TPS("FirstLazyQ") : TPS("FirstQ"));
             WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass));
             return false; // Caller must enqueue the callback.
         }
@@ -455,12 +525,18 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
     rcu_nocb_wait_contended(rdp);
     rcu_nocb_bypass_lock(rdp);
     ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+    n_lazy_cbs = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
     rcu_segcblist_inc_len(&rdp->cblist); /* Must precede enqueue. */
-    rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
+    rcu_cblist_enqueue(&rdp->nocb_bypass, rhp, lazy);
+
     if (!ncbs) {
         WRITE_ONCE(rdp->nocb_bypass_first, j);
-        trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("FirstBQ"));
+        trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
+                    lazy ? TPS("FirstLazyBQ") : TPS("FirstBQ"));
+    } else if (!n_lazy_cbs && lazy) {
+        trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("FirstLazyBQ"));
     }
+
     rcu_nocb_bypass_unlock(rdp);
     smp_mb(); /* Order enqueue before wake. */
     if (ncbs) {
@@ -493,7 +569,7 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
 {
     unsigned long cur_gp_seq;
     unsigned long j;
-    long len;
+    long len, lazy_len, bypass_len;
     struct task_struct *t;

     // If we are being polled or there is no kthread, just leave.
@@ -506,9 +582,16 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
     }
     // Need to actually to a wakeup.
     len = rcu_segcblist_n_cbs(&rdp->cblist);
+    bypass_len = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+    lazy_len = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
     if (was_alldone) {
         rdp->qlen_last_fqs_check = len;
-        if (!irqs_disabled_flags(flags)) {
+        // Only lazy CBs in bypass list
+        if (lazy_len && bypass_len == lazy_len) {
+            rcu_nocb_unlock_irqrestore(rdp, flags);
+            wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_LAZY,
+                       TPS("WakeLazy"));
+        } else if (!irqs_disabled_flags(flags)) {
             /* ... if queue was empty ... */
             rcu_nocb_unlock_irqrestore(rdp, flags);
             wake_nocb_gp(rdp, false);
@@ -599,8 +682,8 @@ static inline bool nocb_gp_update_state_deoffloading(struct rcu_data *rdp,
  */
 static void nocb_gp_wait(struct rcu_data *my_rdp)
 {
-    bool bypass = false;
-    long bypass_ncbs;
+    bool bypass = false, lazy = false;
+    long bypass_ncbs, lazy_ncbs;
     int __maybe_unused cpu = my_rdp->cpu;
     unsigned long cur_gp_seq;
     unsigned long flags;
@@ -636,6 +719,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
      */
     list_for_each_entry_rcu(rdp, &my_rdp->nocb_head_rdp, nocb_entry_rdp, 1) {
         bool needwake_state = false;
+        bool flush_bypass = false;

         if (!nocb_gp_enabled_cb(rdp))
             continue;
@@ -648,22 +732,37 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
             continue;
         }
         bypass_ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
-        if (bypass_ncbs &&
+        lazy_ncbs = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+
+        if (lazy_ncbs &&
+            (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + jiffies_till_flush) ||
+             bypass_ncbs > 2 * qhimark)) {
+            flush_bypass = true;
+        } else if (bypass_ncbs && (lazy_ncbs != bypass_ncbs) &&
             (time_after(j, READ_ONCE(rdp->nocb_bypass_first) + 1) ||
              bypass_ncbs > 2 * qhimark)) {
-            // Bypass full or old, so flush it.
-            (void)rcu_nocb_try_flush_bypass(rdp, j);
-            bypass_ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+            flush_bypass = true;
         } else if (!bypass_ncbs && rcu_segcblist_empty(&rdp->cblist)) {
             rcu_nocb_unlock_irqrestore(rdp, flags);
             if (needwake_state)
                 swake_up_one(&rdp->nocb_state_wq);
             continue; /* No callbacks here, try next. */
         }
+
+        if (flush_bypass) {
+            // Bypass full or old, so flush it.
+            (void)rcu_nocb_try_flush_bypass(rdp, j);
+            bypass_ncbs = rcu_cblist_n_cbs(&rdp->nocb_bypass);
+            lazy_ncbs = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+        }
+
         if (bypass_ncbs) {
             trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
-                        TPS("Bypass"));
-            bypass = true;
+                        bypass_ncbs == lazy_ncbs ? TPS("Lazy") : TPS("Bypass"));
+            if (bypass_ncbs == lazy_ncbs)
+                lazy = true;
+            else
+                bypass = true;
         }
         rnp = rdp->mynode;
@@ -713,12 +812,21 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
     my_rdp->nocb_gp_gp = needwait_gp;
     my_rdp->nocb_gp_seq = needwait_gp ? wait_gp_seq : 0;

-    if (bypass && !rcu_nocb_poll) {
-        // At least one child with non-empty ->nocb_bypass, so set
-        // timer in order to avoid stranding its callbacks.
-        wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_BYPASS,
-                   TPS("WakeBypassIsDeferred"));
+    // At least one child with non-empty ->nocb_bypass, so set
+    // timer in order to avoid stranding its callbacks.
+    if (!rcu_nocb_poll) {
+        // If bypass list only has lazy CBs. Add a deferred
+        // lazy wake up.
+        if (lazy && !bypass) {
+            wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_LAZY,
+                       TPS("WakeLazyIsDeferred"));
+        // Otherwise add a deferred bypass wake up.
+        } else if (bypass) {
+            wake_nocb_gp_defer(my_rdp, RCU_NOCB_WAKE_BYPASS,
+                       TPS("WakeBypassIsDeferred"));
+        }
     }
+
     if (rcu_nocb_poll) {
         /* Polling, so trace if first poll in the series. */
         if (gotcbs)
@@ -999,7 +1107,7 @@ static long rcu_nocb_rdp_deoffload(void *arg)
      * return false, which means that future calls to rcu_nocb_try_bypass()
      * will refuse to put anything into the bypass.
      */
-    WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies));
+    WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false, false));
     /*
      * Start with invoking rcu_core() early. This way if the current thread
      * happens to preempt an ongoing call to rcu_core() in the middle,
@@ -1500,13 +1608,14 @@ static void rcu_init_one_nocb(struct rcu_node *rnp)
 }

 static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                  unsigned long j)
+                  unsigned long j, bool lazy, bool wakegp)
 {
     return true;
 }

 static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
-                bool *was_alldone, unsigned long flags)
+                bool *was_alldone, unsigned long flags,
+                bool lazy)
 {
     return false;
 }
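[ Note for testers: the flush timeout defaults to LAZY_FLUSH_JIFFIES
  (10 * HZ), but the accessors added above let test code shorten it.
  A sketch of the intended use, mirroring what the rcuscale patch later
  in this series does: ]

    unsigned long orig_jif = rcu_lazy_get_jiffies_till_flush();

    rcu_lazy_set_jiffies_till_flush(2 * HZ);    /* flush after ~2s */
    /* ... queue lazy CBs, measure how long until they are invoked ... */
    rcu_lazy_set_jiffies_till_flush(orig_jif);  /* restore the default */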
From patchwork Fri Aug 19 20:48:45 2022
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 12949183
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: Vineeth Pillai, Joel Fernandes, paulmck@kernel.org,
    Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj upadhyay,
    Frederic Weisbecker, Steven Rostedt, rcu
Subject: [PATCH v4 02/14] rcu: shrinker for lazy rcu
Date: Fri, 19 Aug 2022 20:48:45 +0000
Message-Id: <20220819204857.3066329-3-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

From: Vineeth Pillai

The shrinker is used to speed up the freeing of memory potentially held
by RCU lazy callbacks. RCU kernel module test cases show this to be
effective. A test is introduced in a later patch.

Signed-off-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
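[ Background for reviewers, paraphrasing the shrinker contract rather
  than this patch's code: under memory pressure the MM core first calls
  ->count_objects(), which here reports the total number of lazy CBs
  across CPUs (or SHRINK_EMPTY if there are none), and then calls
  ->scan_objects() with a budget in sc->nr_to_scan. Note that the scan
  hook below does not free memory directly; it de-lazifies the bypass
  lists and wakes the GP kthreads so that the queued callbacks, and
  hence the memory they hold, get processed soon. ]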
 kernel/rcu/tree_nocb.h | 52 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 55636da76bc2..edb4e59dbf38 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1259,6 +1259,55 @@ int rcu_nocb_cpu_offload(int cpu)
 }
 EXPORT_SYMBOL_GPL(rcu_nocb_cpu_offload);

+static unsigned long
+lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+    int cpu;
+    unsigned long count = 0;
+
+    /* Snapshot count of all CPUs */
+    for_each_possible_cpu(cpu) {
+        struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+
+        count += rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+    }
+
+    return count ? count : SHRINK_EMPTY;
+}
+
+static unsigned long
+lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+    int cpu;
+    unsigned long flags;
+    unsigned long count = 0;
+
+    /* Snapshot count of all CPUs */
+    for_each_possible_cpu(cpu) {
+        struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+        int _count = rcu_cblist_n_lazy_cbs(&rdp->nocb_bypass);
+
+        if (_count == 0)
+            continue;
+        rcu_nocb_lock_irqsave(rdp, flags);
+        rcu_cblist_reset_lazy_len(&rdp->nocb_bypass);
+        rcu_nocb_unlock_irqrestore(rdp, flags);
+        wake_nocb_gp(rdp, false);
+        sc->nr_to_scan -= _count;
+        count += _count;
+        if (sc->nr_to_scan <= 0)
+            break;
+    }
+    return count ? count : SHRINK_STOP;
+}
+
+static struct shrinker lazy_rcu_shrinker = {
+    .count_objects = lazy_rcu_shrink_count,
+    .scan_objects = lazy_rcu_shrink_scan,
+    .batch = 0,
+    .seeks = DEFAULT_SEEKS,
+};
+
 void __init rcu_init_nohz(void)
 {
     int cpu;
@@ -1296,6 +1345,9 @@ void __init rcu_init_nohz(void)
     if (!rcu_state.nocb_is_setup)
         return;

+    if (register_shrinker(&lazy_rcu_shrinker))
+        pr_err("Failed to register lazy_rcu shrinker!\n");
+
 #if defined(CONFIG_NO_HZ_FULL)
     if (tick_nohz_full_running)
         cpumask_or(rcu_nocb_mask, rcu_nocb_mask, tick_nohz_full_mask);
From patchwork Fri Aug 19 20:48:46 2022
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 12949186
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam,
    "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker,
    Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 03/14] rcuscale: Add laziness and kfree tests
Date: Fri, 19 Aug 2022 20:48:46 +0000
Message-Id: <20220819204857.3066329-4-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

We add 2 tests to rcuscale. The first is a startup test to check
whether we are not too lazy or too hardworking. The second emulates
kfree_rcu() using call_rcu_lazy() and checks memory pressure. In my
testing, call_rcu_lazy() does well to keep memory pressure under
control, similar to kfree_rcu().

Signed-off-by: Joel Fernandes (Google)
---
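[ Illustrative usage: to exercise this path, boot the rcuscale test
  kernel with something like the following. kfree_rcu_test already
  existed; kfree_rcu_by_lazy is added by this patch: ]

    rcuscale.kfree_rcu_test=1 rcuscale.kfree_rcu_by_lazy=1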
 kernel/rcu/rcuscale.c | 74 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index 277a5bfb37d4..ed5544227f4d 100644
--- a/kernel/rcu/rcuscale.c
+++ b/kernel/rcu/rcuscale.c
@@ -95,6 +95,7 @@ torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
 torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
 torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() scale test?");
 torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate.");
+torture_param(int, kfree_rcu_by_lazy, 0, "Use call_rcu_lazy() to emulate kfree_rcu()?");

 static char *scale_type = "rcu";
 module_param(scale_type, charp, 0444);
@@ -658,6 +659,14 @@ struct kfree_obj {
     struct rcu_head rh;
 };

+/* Used if doing RCU-kfree'ing via call_rcu_lazy(). */
+static void kfree_rcu_lazy(struct rcu_head *rh)
+{
+    struct kfree_obj *obj = container_of(rh, struct kfree_obj, rh);
+
+    kfree(obj);
+}
+
 static int
 kfree_scale_thread(void *arg)
 {
@@ -695,6 +704,11 @@ kfree_scale_thread(void *arg)
         if (!alloc_ptr)
             return -ENOMEM;

+        if (kfree_rcu_by_lazy) {
+            call_rcu_lazy(&(alloc_ptr->rh), kfree_rcu_lazy);
+            continue;
+        }
+
         // By default kfree_rcu_test_single and kfree_rcu_test_double are
         // initialized to false. If both have the same value (false or true)
         // both are randomly tested, otherwise only the one with value true
@@ -737,6 +751,9 @@ kfree_scale_cleanup(void)
 {
     int i;

+    if (kfree_rcu_by_lazy)
+        rcu_force_call_rcu_to_lazy(false);
+
     if (torture_cleanup_begin())
         return;

@@ -766,11 +783,64 @@ kfree_scale_shutdown(void *arg)
     return -EINVAL;
 }

+// Used if doing RCU-kfree'ing via call_rcu_lazy().
+static unsigned long jiffies_at_lazy_cb;
+static struct rcu_head lazy_test1_rh;
+static int rcu_lazy_test1_cb_called;
+static void call_rcu_lazy_test1(struct rcu_head *rh)
+{
+    jiffies_at_lazy_cb = jiffies;
+    WRITE_ONCE(rcu_lazy_test1_cb_called, 1);
+}
+
 static int __init
 kfree_scale_init(void)
 {
     long i;
     int firsterr = 0;
+    unsigned long orig_jif, jif_start;
+
+    // If lazy-rcu based kfree'ing is requested, then for kernels that
+    // support it, force all call_rcu() to call_rcu_lazy() so that non-lazy
+    // CBs do not remove laziness of the lazy ones (since the test tries to
+    // stress call_rcu_lazy() for OOM).
+    //
+    // Also, do a quick self-test to ensure laziness is as much as
+    // expected.
+    if (kfree_rcu_by_lazy && !IS_ENABLED(CONFIG_RCU_LAZY)) {
+        pr_alert("CONFIG_RCU_LAZY is disabled, falling back to kfree_rcu() "
+             "for delayed RCU kfree'ing\n");
+        kfree_rcu_by_lazy = 0;
+    }
+
+    if (kfree_rcu_by_lazy) {
+        /* do a test to check the timeout. */
+        orig_jif = rcu_lazy_get_jiffies_till_flush();
+
+        rcu_force_call_rcu_to_lazy(true);
+        rcu_lazy_set_jiffies_till_flush(2 * HZ);
+        rcu_barrier();
+
+        jif_start = jiffies;
+        jiffies_at_lazy_cb = 0;
+        call_rcu_lazy(&lazy_test1_rh, call_rcu_lazy_test1);
+
+        smp_cond_load_relaxed(&rcu_lazy_test1_cb_called, VAL == 1);
+
+        rcu_lazy_set_jiffies_till_flush(orig_jif);
+
+        if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
+            pr_alert("ERROR: Lazy CBs are not being lazy as expected!\n");
+            WARN_ON_ONCE(1);
+            return -1;
+        }
+
+        if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
+            pr_alert("ERROR: Lazy CBs are being too lazy!\n");
+            WARN_ON_ONCE(1);
+            return -1;
+        }
+    }

     kfree_nrealthreads = compute_real(kfree_nthreads);
     /* Start up the kthreads. */
@@ -783,7 +853,9 @@ kfree_scale_init(void)
         schedule_timeout_uninterruptible(1);
     }

-    pr_alert("kfree object size=%zu\n", kfree_mult * sizeof(struct kfree_obj));
+    pr_alert("kfree object size=%zu, kfree_rcu_by_lazy=%d\n",
+         kfree_mult * sizeof(struct kfree_obj),
+         kfree_rcu_by_lazy);

     kfree_reader_tasks = kcalloc(kfree_nrealthreads, sizeof(kfree_reader_tasks[0]),
                    GFP_KERNEL);
From patchwork Fri Aug 19 20:48:47 2022
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 12949185
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam,
    "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker,
    Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 04/14] fs: Move call_rcu() to call_rcu_lazy() in some paths
Date: Fri, 19 Aug 2022 20:48:47 +0000
Message-Id: <20220819204857.3066329-5-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

This is required to prevent callbacks triggering RCU machinery too
quickly and too often, which consumes more power. When testing, we
found that these paths were invoked often when the system is not doing
anything (screen is ON but otherwise idle).

Signed-off-by: Joel Fernandes (Google)
---
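[ Rule of thumb behind these conversions, with made-up names for
  illustration: only callbacks that do nothing but free memory are
  candidates, since anything that waits on a callback's side effect
  could now stall for up to the lazy flush timeout. ]

    /* Good candidate: the callback only frees memory. */
    call_rcu_lazy(&obj->rh, free_obj_rcu);

    /* Poor candidate: a waiter depends on the callback running soon. */
    call_rcu(&req->rh, complete_request_rcu);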
 fs/dcache.c     | 4 ++--
 fs/eventpoll.c  | 2 +-
 fs/file_table.c | 2 +-
 fs/inode.c      | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 93f4f5ee07bf..7f51bac390c8 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -366,7 +366,7 @@ static void dentry_free(struct dentry *dentry)
     if (unlikely(dname_external(dentry))) {
         struct external_name *p = external_name(dentry);
         if (likely(atomic_dec_and_test(&p->u.count))) {
-            call_rcu(&dentry->d_u.d_rcu, __d_free_external);
+            call_rcu_lazy(&dentry->d_u.d_rcu, __d_free_external);
             return;
         }
     }
@@ -374,7 +374,7 @@ static void dentry_free(struct dentry *dentry)
     if (dentry->d_flags & DCACHE_NORCU)
         __d_free(&dentry->d_u.d_rcu);
     else
-        call_rcu(&dentry->d_u.d_rcu, __d_free);
+        call_rcu_lazy(&dentry->d_u.d_rcu, __d_free);
 }

 /*
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 971f98af48ff..57b3f781760c 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -729,7 +729,7 @@ static int ep_remove(struct eventpoll *ep, struct epitem *epi)
      * ep->mtx. The rcu read side, reverse_path_check_proc(), does not make
      * use of the rbn field.
      */
-    call_rcu(&epi->rcu, epi_rcu_free);
+    call_rcu_lazy(&epi->rcu, epi_rcu_free);

     percpu_counter_dec(&ep->user->epoll_watches);

diff --git a/fs/file_table.c b/fs/file_table.c
index 5424e3a8df5f..417f57e9cb30 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -56,7 +56,7 @@ static inline void file_free(struct file *f)
     security_file_free(f);
     if (!(f->f_mode & FMODE_NOACCOUNT))
         percpu_counter_dec(&nr_files);
-    call_rcu(&f->f_u.fu_rcuhead, file_free_rcu);
+    call_rcu_lazy(&f->f_u.fu_rcuhead, file_free_rcu);
 }

 /*
diff --git a/fs/inode.c b/fs/inode.c
index bd4da9c5207e..38fe040ddbd6 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -312,7 +312,7 @@ static void destroy_inode(struct inode *inode)
         return;
     }
     inode->free_inode = ops->free_inode;
-    call_rcu(&inode->i_rcu, i_callback);
+    call_rcu_lazy(&inode->i_rcu, i_callback);
 }

 /**
From patchwork Fri Aug 19 20:48:48 2022
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 12949187
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam,
    "Uladzislau Rezki (Sony)", Neeraj upadhyay, Frederic Weisbecker,
    Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 05/14] rcutorture: Add test code for call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:48 +0000
Message-Id: <20220819204857.3066329-6-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

We add a new RCU type to test call_rcu_lazy(). This allows us to just
override the '.call' callback. To compensate for the laziness, we force
the flush timeout down to a small number of jiffies. The idea of this
test is to stress the new code paths for stability and to ensure it at
least provides behavior in parity with, or similar to, call_rcu(). The
actual check for the amount of laziness is in another test (rcuscale).

Signed-off-by: Joel Fernandes (Google)
---
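[ Illustrative invocation: the new scenario can be run with the usual
  rcutorture wrapper, which picks up the TREE11 Kconfig fragment and
  TREE11.boot file added below: ]

    tools/testing/selftests/rcutorture/bin/kvm.sh --configs TREE11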
 kernel/rcu/rcu.h                              |  1 +
 kernel/rcu/rcutorture.c                       | 60 ++++++++++++++++++-
 kernel/rcu/tree.c                             |  1 +
 .../selftests/rcutorture/configs/rcu/CFLIST   |  1 +
 .../selftests/rcutorture/configs/rcu/TREE11   | 18 ++++++
 .../rcutorture/configs/rcu/TREE11.boot        |  8 +++
 6 files changed, 88 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TREE11
 create mode 100644 tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 608f6ab76c7f..aa3243e49506 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -460,6 +460,7 @@ enum rcutorture_type {
     RCU_TASKS_TRACING_FLAVOR,
     RCU_TRIVIAL_FLAVOR,
     SRCU_FLAVOR,
+    RCU_LAZY_FLAVOR,
     INVALID_RCU_FLAVOR
 };

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 7120165a9342..c52cc4c064f9 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -872,6 +872,64 @@ static struct rcu_torture_ops tasks_rude_ops = {

 #endif // #else #ifdef CONFIG_TASKS_RUDE_RCU

+#ifdef CONFIG_RCU_LAZY
+
+/*
+ * Definitions for lazy RCU torture testing.
+ */
+static unsigned long orig_jiffies_till_flush;
+
+static void rcu_sync_torture_init_lazy(void)
+{
+    rcu_sync_torture_init();
+
+    orig_jiffies_till_flush = rcu_lazy_get_jiffies_till_flush();
+    rcu_lazy_set_jiffies_till_flush(50);
+}
+
+static void rcu_lazy_cleanup(void)
+{
+    rcu_lazy_set_jiffies_till_flush(orig_jiffies_till_flush);
+}
+
+static struct rcu_torture_ops rcu_lazy_ops = {
+    .ttype              = RCU_LAZY_FLAVOR,
+    .init               = rcu_sync_torture_init_lazy,
+    .cleanup            = rcu_lazy_cleanup,
+    .readlock           = rcu_torture_read_lock,
+    .read_delay         = rcu_read_delay,
+    .readunlock         = rcu_torture_read_unlock,
+    .readlock_held      = torture_readlock_not_held,
+    .get_gp_seq         = rcu_get_gp_seq,
+    .gp_diff            = rcu_seq_diff,
+    .deferred_free      = rcu_torture_deferred_free,
+    .sync               = synchronize_rcu,
+    .exp_sync           = synchronize_rcu_expedited,
+    .get_gp_state       = get_state_synchronize_rcu,
+    .start_gp_poll      = start_poll_synchronize_rcu,
+    .poll_gp_state      = poll_state_synchronize_rcu,
+    .cond_sync          = cond_synchronize_rcu,
+    .call               = call_rcu_lazy,
+    .cb_barrier         = rcu_barrier,
+    .fqs                = rcu_force_quiescent_state,
+    .stats              = NULL,
+    .gp_kthread_dbg     = show_rcu_gp_kthreads,
+    .check_boost_failed = rcu_check_boost_fail,
+    .stall_dur          = rcu_jiffies_till_stall_check,
+    .irq_capable        = 1,
+    .can_boost          = IS_ENABLED(CONFIG_RCU_BOOST),
+    .extendables        = RCUTORTURE_MAX_EXTEND,
+    .name               = "rcu_lazy"
+};
+
+#define LAZY_OPS &rcu_lazy_ops,
+
+#else // #ifdef CONFIG_RCU_LAZY
+
+#define LAZY_OPS
+
+#endif // #else #ifdef CONFIG_RCU_LAZY
+

 #ifdef CONFIG_TASKS_TRACE_RCU

@@ -3145,7 +3203,7 @@ rcu_torture_init(void)
     unsigned long gp_seq = 0;
     static struct rcu_torture_ops *torture_ops[] = {
         &rcu_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops, &busted_srcud_ops,
-        TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS
+        TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS LAZY_OPS
         &trivial_ops,
     };

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e76fef8031be..67026382dc21 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -600,6 +600,7 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
 {
     switch (test_type) {
     case RCU_FLAVOR:
+    case RCU_LAZY_FLAVOR:
         *flags = READ_ONCE(rcu_state.gp_flags);
         *gp_seq = rcu_seq_current(&rcu_state.gp_seq);
         break;
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/CFLIST b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
index 98b6175e5aa0..609c3370616f 100644
--- a/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
+++ b/tools/testing/selftests/rcutorture/configs/rcu/CFLIST
@@ -5,6 +5,7 @@ TREE04
 TREE05
 TREE07
 TREE09
+TREE11
 SRCU-N
 SRCU-P
 SRCU-T
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE11 b/tools/testing/selftests/rcutorture/configs/rcu/TREE11
new file mode 100644
index 000000000000..436013f3e015
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE11
@@ -0,0 +1,18 @@
+CONFIG_SMP=y
+CONFIG_PREEMPT_NONE=n
+CONFIG_PREEMPT_VOLUNTARY=n
+CONFIG_PREEMPT=y
+#CHECK#CONFIG_PREEMPT_RCU=y
+CONFIG_HZ_PERIODIC=n
+CONFIG_NO_HZ_IDLE=y
+CONFIG_NO_HZ_FULL=n
+CONFIG_RCU_TRACE=y
+CONFIG_HOTPLUG_CPU=y
+CONFIG_MAXSMP=y
+CONFIG_CPUMASK_OFFSTACK=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_DEBUG_LOCK_ALLOC=n
+CONFIG_RCU_BOOST=n
+CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_LAZY=y
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot
new file mode 100644
index 000000000000..9b6f720d4ccd
--- /dev/null
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot
@@ -0,0 +1,8 @@
+maxcpus=8 nr_cpus=43
+rcutree.gp_preinit_delay=3
+rcutree.gp_init_delay=3
+rcutree.gp_cleanup_delay=3
+rcu_nocbs=0-7
+rcutorture.torture_type=rcu_lazy
+rcutorture.nocbs_nthreads=8
+rcutorture.fwd_progress=0
From patchwork Fri Aug 19 20:48:49 2022
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 06/14] debug: Toggle lazy at runtime and change flush jiffies
Date: Fri, 19 Aug 2022 20:48:49 +0000
Message-Id: <20220819204857.3066329-7-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

Enable or disable this feature by writing 1 or 0 to /proc/sys/vm/rcu_lazy.
Change the value of /proc/sys/vm/rcu_lazy_jiffies to change the maximum
duration before a flush.

Do not merge; this is a debug aid for reviewers only.

Signed-off-by: Joel Fernandes (Google)
---
 include/linux/sched/sysctl.h |  3 +++
 kernel/rcu/tree_nocb.h       |  9 +++++++++
 kernel/sysctl.c              | 17 +++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 2cd928f15df6..54610f9cd962 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -14,6 +14,9 @@ extern unsigned long sysctl_hung_task_timeout_secs;
 enum { sysctl_hung_task_timeout_secs = 0 };
 #endif
 
+extern unsigned int sysctl_rcu_lazy;
+extern unsigned int sysctl_rcu_lazy_jiffies;
+
 enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_NONE,
 	SCHED_TUNABLESCALING_LOG,

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index edb4e59dbf38..16621b32de46 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -266,6 +266,9 @@ static bool wake_nocb_gp(struct rcu_data *rdp, bool force)
 #define LAZY_FLUSH_JIFFIES (10 * HZ)
 unsigned long jiffies_till_flush = LAZY_FLUSH_JIFFIES;
 
+unsigned int sysctl_rcu_lazy_jiffies = LAZY_FLUSH_JIFFIES;
+unsigned int sysctl_rcu_lazy = 1;
+
 #ifdef CONFIG_RCU_LAZY
 // To be called only from test code.
 void rcu_lazy_set_jiffies_till_flush(unsigned long jif)
@@ -292,6 +295,9 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
 	struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
 	unsigned long mod_jif = 0;
 
+	/* debug: not for merge */
+	rcu_lazy_set_jiffies_till_flush(sysctl_rcu_lazy_jiffies);
+
 	raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags);
 
 	/*
@@ -697,6 +703,9 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 	unsigned long wait_gp_seq = 0; // Suppress "use uninitialized" warning.
 	bool wasempty = false;
 
+	/* debug: not for merge */
+	rcu_lazy_set_jiffies_till_flush(sysctl_rcu_lazy_jiffies);
+
 	/*
 	 * Each pass through the following loop checks for CBs and for the
 	 * nearest grace period (if any) to wait for next. The CB kthreads
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index b00f92df0af5..bbe25d635dc0 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2450,6 +2450,23 @@ static struct ctl_table vm_table[] = {
 		.extra2		= SYSCTL_ONE,
 	},
 #endif
+#ifdef CONFIG_RCU_LAZY
+	{
+		.procname	= "rcu_lazy",
+		.data		= &sysctl_rcu_lazy,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
+		.procname	= "rcu_lazy_jiffies",
+		.data		= &sysctl_rcu_lazy_jiffies,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+#endif
+
 	{ }
 };
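Both knobs are wired up with plain proc_dointvec, so any integer can be
written; per the commit message, e.g. "echo 0 > /proc/sys/vm/rcu_lazy"
disables the feature. Since rcu_lazy is conceptually a boolean, a bounded
variant of its table entry could look like the following. This is a
hypothetical tweak for illustration, not something this patch does
(proc_douintvec_minmax would be the strictly matching handler for an
unsigned int):

	{
		.procname	= "rcu_lazy",
		.data		= &sysctl_rcu_lazy,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,	/* clamp writes to 0..1 */
		.extra2		= SYSCTL_ONE,
	},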
From patchwork Fri Aug 19 20:48:50 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 07/14] cred: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:50 +0000
Message-Id: <20220819204857.3066329-8-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent the callbacks from triggering the RCU
machinery too quickly and too often, which increases the system's power
consumption.

Signed-off-by: Joel Fernandes (Google)
---
 kernel/cred.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cred.c b/kernel/cred.c
index e10c15f51c1f..c7cb2e3ac73a 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -150,7 +150,7 @@ void __put_cred(struct cred *cred)
 	if (cred->non_rcu)
 		put_cred_rcu(&cred->rcu);
 	else
-		call_rcu(&cred->rcu, put_cred_rcu);
+		call_rcu_lazy(&cred->rcu, put_cred_rcu);
 }
 EXPORT_SYMBOL(__put_cred);
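The conversion pattern used here, and in the remaining call-site patches of
this series, is mechanical: only the function name at the queueing site
changes, while the rcu_head/callback plumbing stays exactly as it is for
call_rcu(). In generic form (a sketch with a hypothetical 'foo' type, not
code from any of these patches):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

/* Invoked after a grace period, when no reader can still see 'f'. */
static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	kfree(f);
}

static void foo_release(struct foo *f)
{
	/* A plain memory free is not latency sensitive; let RCU batch it. */
	call_rcu_lazy(&f->rcu, foo_free_rcu);
}

The call sites chosen in this series share that property: the callback only
frees memory, so nothing user-visible waits on its completion.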
From patchwork Fri Aug 19 20:48:51 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 08/14] security: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:51 +0000
Message-Id: <20220819204857.3066329-9-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent the callbacks from triggering the RCU
machinery too quickly and too often, which increases the system's power
consumption.

Signed-off-by: Joel Fernandes (Google)
---
 security/security.c    | 2 +-
 security/selinux/avc.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/security/security.c b/security/security.c
index ea7163c20751..d76f4951b2bd 100644
--- a/security/security.c
+++ b/security/security.c
@@ -1053,7 +1053,7 @@ void security_inode_free(struct inode *inode)
 	 * The inode will be freed after the RCU grace period too.
 	 */
 	if (inode->i_security)
-		call_rcu((struct rcu_head *)inode->i_security,
+		call_rcu_lazy((struct rcu_head *)inode->i_security,
 				inode_free_by_rcu);
 }

diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index 9a43af0ebd7d..381f046d820f 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -442,7 +442,7 @@ static void avc_node_free(struct rcu_head *rhead)
 static void avc_node_delete(struct selinux_avc *avc, struct avc_node *node)
 {
 	hlist_del_rcu(&node->list);
-	call_rcu(&node->rhead, avc_node_free);
+	call_rcu_lazy(&node->rhead, avc_node_free);
 	atomic_dec(&avc->avc_cache.active_nodes);
 }
 
@@ -458,7 +458,7 @@ static void avc_node_replace(struct selinux_avc *avc,
 			     struct avc_node *new, struct avc_node *old)
 {
 	hlist_replace_rcu(&old->list, &new->list);
-	call_rcu(&old->rhead, avc_node_free);
+	call_rcu_lazy(&old->rhead, avc_node_free);
 	atomic_dec(&avc->avc_cache.active_nodes);
 }
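One detail of the security_inode_free() hunk is worth spelling out: it casts
inode->i_security directly to struct rcu_head *, which is only valid because
the rcu_head sits at the very start of the LSM inode blob. A stripped-down
sketch of that layout rule, with illustrative names rather than the kernel's
actual structures:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct sec_blob {
	struct rcu_head rcu;	/* must stay the first member */
	/* ... LSM-specific state follows ... */
};

static void sec_blob_free_rcu(struct rcu_head *head)
{
	kfree(head);		/* head is the start of the allocation */
}

static void sec_blob_free(void *blob)
{
	/* Legal only because offsetof(struct sec_blob, rcu) == 0. */
	call_rcu_lazy((struct rcu_head *)blob, sec_blob_free_rcu);
}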
From patchwork Fri Aug 19 20:48:52 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 09/14] net/core: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:52 +0000
Message-Id: <20220819204857.3066329-10-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent the callbacks from triggering the RCU
machinery too quickly and too often, which increases the system's power
consumption.

Signed-off-by: Joel Fernandes (Google)
---
 net/core/dst.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/dst.c b/net/core/dst.c
index d16c2c9bfebd..68c240a4a0d7 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -174,7 +174,7 @@ void dst_release(struct dst_entry *dst)
 			net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
 					     __func__, dst, newrefcnt);
 		if (!newrefcnt)
-			call_rcu(&dst->rcu_head, dst_destroy_rcu);
+			call_rcu_lazy(&dst->rcu_head, dst_destroy_rcu);
 	}
 }
 EXPORT_SYMBOL(dst_release);
From patchwork Fri Aug 19 20:48:53 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 10/14] kernel: Move various core kernel usages to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:53 +0000
Message-Id: <20220819204857.3066329-11-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

Signed-off-by: Joel Fernandes (Google)
---
 kernel/exit.c              | 2 +-
 kernel/pid.c               | 2 +-
 kernel/time/posix-timers.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 853c6a943fce..14cde19ff4c2 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -180,7 +180,7 @@ static void delayed_put_task_struct(struct rcu_head *rhp)
 void put_task_struct_rcu_user(struct task_struct *task)
 {
 	if (refcount_dec_and_test(&task->rcu_users))
-		call_rcu(&task->rcu, delayed_put_task_struct);
+		call_rcu_lazy(&task->rcu, delayed_put_task_struct);
 }
 
 void release_task(struct task_struct *p)

diff --git a/kernel/pid.c b/kernel/pid.c
index 2fc0a16ec77b..5a5144519d70 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -153,7 +153,7 @@ void free_pid(struct pid *pid)
 	}
 	spin_unlock_irqrestore(&pidmap_lock, flags);
 
-	call_rcu(&pid->rcu, delayed_put_pid);
+	call_rcu_lazy(&pid->rcu, delayed_put_pid);
 }
 
 struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,

diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 06d1236b3804..63489c4070cd 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -485,7 +485,7 @@ static void release_posix_timer(struct k_itimer *tmr, int it_id_set)
 	}
 	put_pid(tmr->it_pid);
 	sigqueue_free(tmr->sigq);
-	call_rcu(&tmr->rcu, k_itimer_rcu_free);
+	call_rcu_lazy(&tmr->rcu, k_itimer_rcu_free);
 }
 
 static int common_timer_create(struct k_itimer *new_timer)
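All three hunks above have the same shape as put_task_struct_rcu_user(): the
last reference is dropped, and the object is handed to RCU for a deferred,
now lazy, free. A generic sketch of that refcount-plus-lazy-RCU pattern,
using a hypothetical 'session' type rather than any of the structures
touched here:

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct session {
	refcount_t users;
	struct rcu_head rcu;
};

static void session_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct session, rcu));
}

static void session_put(struct session *s)
{
	/* RCU readers may still be traversing to 's', so the free is
	 * deferred past a grace period; lazily, because only memory
	 * reclamation depends on the callback running. */
	if (refcount_dec_and_test(&s->users))
		call_rcu_lazy(&s->rcu, session_free_rcu);
}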
From patchwork Fri Aug 19 20:48:54 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 11/14] lib: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:54 +0000
Message-Id: <20220819204857.3066329-12-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

Move the radix-tree and xarray node frees to call_rcu_lazy(). This is
required to prevent the callbacks from triggering the RCU machinery too
quickly and too often, which increases the system's power consumption.
Signed-off-by: Joel Fernandes (Google)
---
 lib/radix-tree.c | 2 +-
 lib/xarray.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index b3afafe46fff..1526dc9e1d93 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -305,7 +305,7 @@ void radix_tree_node_rcu_free(struct rcu_head *head)
 
 static inline void radix_tree_node_free(struct radix_tree_node *node)
 {
-	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+	call_rcu_lazy(&node->rcu_head, radix_tree_node_rcu_free);
 }
 
 /*

diff --git a/lib/xarray.c b/lib/xarray.c
index ea9ce1f0b386..230abc8045fe 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -257,7 +257,7 @@ static void xa_node_free(struct xa_node *node)
 {
 	XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
 	node->array = XA_RCU_FREE;
-	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+	call_rcu_lazy(&node->rcu_head, radix_tree_node_rcu_free);
 }
 
 /*
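One property of these conversions that is easy to overlook: rcu_barrier()
must still wait for lazy callbacks, which is what the rcutorture flavor
added earlier in this series relies on when it keeps '.cb_barrier =
rcu_barrier'. Assuming that holds (the series' reuse of the bypass lists is
what is meant to provide it), the usual unload discipline for a hypothetical
module using call_rcu_lazy() is unchanged:

#include <linux/module.h>
#include <linux/rcupdate.h>

void mymod_remove_all(void);	/* hypothetical: queues call_rcu_lazy() frees */

static void __exit mymod_exit(void)
{
	mymod_remove_all();
	rcu_barrier();		/* waits for all queued CBs, lazy ones included */
}
module_exit(mymod_exit);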
From patchwork Fri Aug 19 20:48:55 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 12/14] i915: Move call_rcu() to call_rcu_lazy()
Date: Fri, 19 Aug 2022 20:48:55 +0000
Message-Id: <20220819204857.3066329-13-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent the callbacks from triggering the RCU
machinery too quickly and too often, which increases the system's power
consumption.

Signed-off-by: Joel Fernandes (Google)
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..74f4b6e707c2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -343,7 +343,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		__i915_gem_free_object(obj);
 
 		/* But keep the pointer alive for RCU-protected lookups */
-		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+		call_rcu_lazy(&obj->rcu, __i915_gem_free_object_rcu);
 		cond_resched();
 	}
 }
From patchwork Fri Aug 19 20:48:56 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 13/14] fork: Move thread_stack_free_rcu to call_rcu_lazy
Date: Fri, 19 Aug 2022 20:48:56 +0000
Message-Id: <20220819204857.3066329-14-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

This is required to prevent the callbacks from triggering the RCU
machinery too quickly and too often, which increases the system's power
consumption.
Signed-off-by: Joel Fernandes (Google)
---
 kernel/fork.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index c9a2e19d67e5..a4535cf5446f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -227,7 +227,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 	struct vm_stack *vm_stack = tsk->stack;
 
 	vm_stack->stack_vm_area = tsk->stack_vm_area;
-	call_rcu(&vm_stack->rcu, thread_stack_free_rcu);
+	call_rcu_lazy(&vm_stack->rcu, thread_stack_free_rcu);
 }
 
 static int free_vm_stack_cache(unsigned int cpu)
@@ -354,7 +354,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 {
 	struct rcu_head *rh = tsk->stack;
 
-	call_rcu(rh, thread_stack_free_rcu);
+	call_rcu_lazy(rh, thread_stack_free_rcu);
 }
 
 static int alloc_thread_stack_node(struct task_struct *tsk, int node)
@@ -389,7 +389,7 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
 {
 	struct rcu_head *rh = tsk->stack;
 
-	call_rcu(rh, thread_stack_free_rcu);
+	call_rcu_lazy(rh, thread_stack_free_rcu);
 }
 
 static int alloc_thread_stack_node(struct task_struct *tsk, int node)
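These fork.c paths depend on a subtlety the diff makes easy to miss: the
rcu_head is stored inside the dead stack itself, so the moment the callback
is queued the memory belongs to RCU and must not be touched again. A reduced
sketch of that contract, with hypothetical names:

#include <linux/rcupdate.h>
#include <linux/slab.h>

static void buf_free_rcu(struct rcu_head *rh)
{
	kfree(rh);			/* rh is the start of the buffer */
}

static void buf_delayed_free(void *buf)
{
	struct rcu_head *rh = buf;	/* reuse the buffer's own storage */

	call_rcu_lazy(rh, buf_free_rcu);
	/* 'buf' now belongs to RCU; treat it as freed from this point on. */
}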
From patchwork Fri Aug 19 20:48:57 2022

From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", paulmck@kernel.org, Rushikesh S Kadam, "Uladzislau Rezki (Sony)", Neeraj Upadhyay, Frederic Weisbecker, Steven Rostedt, rcu, vineeth@bitbyteword.org
Subject: [PATCH v4 14/14] rcu/tree: Move trace_rcu_callback() before bypassing
Date: Fri, 19 Aug 2022 20:48:57 +0000
Message-Id: <20220819204857.3066329-15-joel@joelfernandes.org>
In-Reply-To: <20220819204857.3066329-1-joel@joelfernandes.org>
References: <20220819204857.3066329-1-joel@joelfernandes.org>

If a callback is queued onto the bypass list, trace_rcu_callback() is
never emitted for it, which makes it unclear when the callback was
actually queued: only the later trace_rcu_invoke_callback() event shows
up. Fix this by emitting trace_rcu_callback() before the
rcu_nocb_try_bypass() call.

Signed-off-by: Joel Fernandes (Google)
---
 kernel/rcu/tree.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 67026382dc21..6e14f0257669 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3101,10 +3101,7 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy)
 	}
 
 	check_cb_ovld(rdp);
-	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
-		return; // Enqueued onto ->nocb_bypass, so just leave.
-	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
-	rcu_segcblist_enqueue(&rdp->cblist, head);
+
 	if (__is_kvfree_rcu_offset((unsigned long)func))
 		trace_rcu_kvfree_callback(rcu_state.name, head,
 					  (unsigned long)func,
@@ -3113,6 +3110,11 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy)
 		trace_rcu_callback(rcu_state.name, head,
 				   rcu_segcblist_n_cbs(&rdp->cblist));
 
+	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
+		return; // Enqueued onto ->nocb_bypass, so just leave.
+	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
+	rcu_segcblist_enqueue(&rdp->cblist, head);
+
 	trace_rcu_segcb_stats(&rdp->cblist, TPS("SegCBQueued"));
 
 	/* Go handle any RCU core processing required. */