
[1/1] rcu/kvfree: Do not run a page work if a cache is disabled

Message ID 20230411131341.9910-1-urezki@gmail.com (mailing list archive)
State Accepted
Commit 5e433764beec0134a9a677f399a6e4539eb8870d
Series [1/1] rcu/kvfree: Do not run a page work if a cache is disabled

Commit Message

Uladzislau Rezki April 11, 2023, 1:13 p.m. UTC
By default the cache size is 5 pages per-cpu. But it can
be disabled at boot time by setting the rcu_min_cached_objs
to zero.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 kernel/rcu/tree.c | 4 ++++
 1 file changed, 4 insertions(+)
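
For reference, disabling the cache as the commit message describes is done on the kernel command line. The fragment below assumes the usual `rcutree.` module-parameter prefix that kernel/rcu/tree.c parameters carry:

```
rcutree.rcu_min_cached_objs=0
```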

Comments

Paul E. McKenney April 12, 2023, 4:49 a.m. UTC | #1
On Tue, Apr 11, 2023 at 03:13:41PM +0200, Uladzislau Rezki (Sony) wrote:
> By default the cache size is 5 pages per-cpu. But it can
> be disabled at boot time by setting the rcu_min_cached_objs
> to zero.
> 
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

That does get rid of a needless hrtimer &c in that case, good!

I have queued this with the usual wordsmithing below, so please check
it.

							Thanx, Paul

------------------------------------------------------------------------

commit 5e433764beec0134a9a677f399a6e4539eb8870d
Author: Uladzislau Rezki (Sony) <urezki@gmail.com>
Date:   Tue Apr 11 15:13:41 2023 +0200

    rcu/kvfree: Do not run a page work if a cache is disabled
    
    By default the cache size is 5 pages per CPU, but it can be disabled at
    boot time by setting the rcu_min_cached_objs to zero.  When that happens,
    the current code will uselessly set an hrtimer to schedule refilling this
    cache with zero pages.  This commit therefore streamlines this process
    by simply refusing to set the hrtimer when rcu_min_cached_objs is zero.
    
    Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 41daae3239b5..f855d2a85597 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3247,6 +3247,10 @@ static void fill_page_cache_func(struct work_struct *work)
 static void
 run_page_cache_worker(struct kfree_rcu_cpu *krcp)
 {
+	// If cache disabled, bail out.
+	if (!rcu_min_cached_objs)
+		return;
+
 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
 			!atomic_xchg(&krcp->work_in_progress, 1)) {
 		if (atomic_read(&krcp->backoff_page_cache_fill)) {

Patch

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 35be35f8236b..21e3d9dffde5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3246,6 +3246,10 @@ static void fill_page_cache_func(struct work_struct *work)
 static void
 run_page_cache_worker(struct kfree_rcu_cpu *krcp)
 {
+	// If cache disabled, bail out.
+	if (!rcu_min_cached_objs)
+		return;
+
 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
 			!atomic_xchg(&krcp->work_in_progress, 1)) {
 		if (atomic_read(&krcp->backoff_page_cache_fill)) {