@@ -228,9 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
* it must synchronize the delayed page table writes properly on other CPUs.
*/
#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
-#define arch_enter_lazy_mmu_mode()	do {} while (0)
-#define arch_leave_lazy_mmu_mode()	do {} while (0)
-#define arch_flush_lazy_mmu_mode()	do {} while (0)
+static inline void arch_enter_lazy_mmu_mode(void)
+{
+	VM_WARN_ON(preemptible());
+}
+static inline void arch_leave_lazy_mmu_mode(void)
+{
+	VM_WARN_ON(preemptible());
+}
+static inline void arch_flush_lazy_mmu_mode(void)
+{
+	VM_WARN_ON(preemptible());
+}
#endif

#ifndef pte_batch_hint
The lazy MMU batching may only be entered and left under the
protection of the page table locks for all page tables which may
be modified. Yet, there were cases where arch_enter_lazy_mmu_mode()
was called without the locks taken, e.g. commit b9ef323ea168
("powerpc/64s: Disable preemption in hash lazy mmu mode").

Make the default arch_enter|leave|flush_lazy_mmu_mode() callbacks
complain, at least in case preemption is enabled, to detect wrong
contexts.

Most platforms do not implement the callbacks, so to avoid a
performance impact the complaint is only emitted when the
CONFIG_DEBUG_VM option is enabled.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 include/linux/pgtable.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
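For reference, below is a minimal sketch (not part of this patch) of
the calling pattern the new warnings are meant to enforce: the lazy
MMU section is bracketed by the page table lock, so preemption is
disabled and VM_WARN_ON(preemptible()) stays silent. The helper name
example_update_ptes is hypothetical; pte_offset_map_lock() and
pte_unmap_unlock() are the usual page table lock accessors.

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Hypothetical helper, for illustration only: updates the PTEs of
 * one PMD range with the page table lock held. Taking the spinlock
 * disables preemption, so the VM_WARN_ON(preemptible()) checks in
 * the default callbacks above do not fire.
 */
static void example_update_ptes(struct mm_struct *mm, pmd_t *pmd,
				unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return;

	arch_enter_lazy_mmu_mode();	/* ptl held: !preemptible() */
	do {
		/* batched PTE updates would go here */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	arch_leave_lazy_mmu_mode();	/* left under the same lock */

	pte_unmap_unlock(pte - 1, ptl);
}

By contrast, a caller that entered lazy MMU mode without any lock
held, as in the powerpc case referenced above, would be preemptible
at that point, and the default callbacks would now warn under
CONFIG_DEBUG_VM.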