
[RT,1/3] asm-generic/preempt: also check preempt_lazy_count for should_resched() etc.

Message ID: 20230510162406.1955-2-jszhang@kernel.org (mailing list archive)
State: Handled Elsewhere
Series: riscv: add PREEMPT_RT support

Checks

Context Check Description
conchuod/cover_letter success Series has a cover letter
conchuod/tree_selection success Guessed tree name to be for-next at HEAD ac9a78681b92
conchuod/fixes_present success Fixes tag not required for -next series
conchuod/maintainers_pattern success MAINTAINERS pattern errors before the patch: 6 and now 6
conchuod/verify_signedoff success Signed-off-by tag matches author and committer
conchuod/kdoc success Errors and warnings before: 0 this patch: 0
conchuod/build_rv64_clang_allmodconfig success Errors and warnings before: 2849 this patch: 2849
conchuod/module_param success Was 0 now: 0
conchuod/build_rv64_gcc_allmodconfig success Errors and warnings before: 16380 this patch: 16380
conchuod/build_rv32_defconfig success Build OK
conchuod/dtb_warn_rv64 success Errors and warnings before: 3 this patch: 3
conchuod/header_inline success No static functions without inline keyword in header files
conchuod/checkpatch success total: 0 errors, 0 warnings, 0 checks, 26 lines checked
conchuod/build_rv64_nommu_k210_defconfig success Build OK
conchuod/verify_fixes success No Fixes tag
conchuod/build_rv64_nommu_virt_defconfig success Build OK

Commit Message

Jisheng Zhang May 10, 2023, 4:24 p.m. UTC
The lazy preempt count is a great mechanism for helping ordinary
SCHED_OTHER tasks' throughput under PREEMPT_RT. But the current
implementation relies on arch-specific code to check
preempt_lazy_count in should_resched() and
__preempt_count_dec_and_test(); if an arch such as riscv uses the
asm-generic preempt implementation, it loses the lazy preempt
mechanism.
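
For illustration, a minimal user-space model of the patched
__preempt_count_dec_and_test() (the counters and the flag below are
simplified stand-ins for the kernel state, not the real kernel
definitions):

  #include <stdbool.h>
  #include <stdio.h>

  static int preempt_count;  /* models *preempt_count_ptr() */
  static int lazy_count;     /* models ...->preempt_lazy_count */
  static bool need_resched;  /* models tif_need_resched() */

  /* Mirrors the patched check: the task may only be preempted once
   * both the normal and the lazy count have dropped to zero. */
  static bool dec_and_test(void)
  {
          return !--preempt_count && !lazy_count && need_resched;
  }

  int main(void)
  {
          need_resched = true;

          /* preempt_disable() section nested in a lazy-disabled region */
          lazy_count = 1;
          preempt_count = 1;
          printf("lazy held:     %d\n", dec_and_test()); /* prints 0 */

          /* same section with the lazy count released */
          lazy_count = 0;
          preempt_count = 1;
          printf("lazy released: %d\n", dec_and_test()); /* prints 1 */

          return 0;
  }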

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
---
 include/asm-generic/preempt.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Patch

diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index b4d43a4af5f7..b583e7c38ccf 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -59,6 +59,11 @@ static __always_inline void __preempt_count_sub(int val)
 	*preempt_count_ptr() -= val;
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+#define preempt_lazy_count()		(current_thread_info()->preempt_lazy_count)
+#else
+#define preempt_lazy_count()		(0)
+#endif
 static __always_inline bool __preempt_count_dec_and_test(void)
 {
 	/*
@@ -66,7 +71,7 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 	 * operations; we cannot use PREEMPT_NEED_RESCHED because it might get
 	 * lost.
 	 */
-	return !--*preempt_count_ptr() && tif_need_resched();
+	return !--*preempt_count_ptr() && !preempt_lazy_count() && tif_need_resched();
 }
 
 /*
@@ -75,6 +80,7 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 static __always_inline bool should_resched(int preempt_offset)
 {
 	return unlikely(preempt_count() == preempt_offset &&
+			!preempt_lazy_count() &&
 			tif_need_resched());
 }