diff mbox series

[3/3] lockdep: Use line-buffered printk() for lockdep messages.

Message ID 1541165517-3557-3-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp (mailing list archive)
State New, archived
Headers show
Series [v6,1/3] printk: Add line-buffered printk() API. | expand

Commit Message

Tetsuo Handa Nov. 2, 2018, 1:31 p.m. UTC
syzbot is sometimes getting mixed output like below due to concurrent
printk(). Mitigate such output by using the line-buffered printk() API.

  RCU used illegally from idle CPU!
  rcu_scheduler_active = 2, debug_locks = 1
  RSP: 0018:ffffffff88007bb8 EFLAGS: 00000286
  RCU used illegally from extended quiescent state!
   ORIG_RAX: ffffffffffffff13
  1 lock held by swapper/1/0:
  RAX: dffffc0000000000 RBX: 1ffffffff1000f7b RCX: 0000000000000000
   #0: 
  RDX: 1ffffffff10237b8 RSI: 0000000000000001 RDI: ffffffff8811bdc0
  000000004b34587c
  RBP: ffffffff88007bb8 R08: ffffffff88075e00 R09: 0000000000000000
   (
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  rcu_read_lock
  R13: ffffffff88007c78 R14: 0000000000000000 R15: 0000000000000000
  ){....}
   arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
   default_idle+0xc2/0x410 arch/x86/kernel/process.c:498
  , at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/lockdep.c | 268 +++++++++++++++++++++++++++--------------------
 1 file changed, 155 insertions(+), 113 deletions(-)

Comments

Peter Zijlstra Nov. 2, 2018, 1:36 p.m. UTC | #1
On Fri, Nov 02, 2018 at 10:31:57PM +0900, Tetsuo Handa wrote:
> syzbot is sometimes getting mixed output like below due to concurrent
> printk(). Mitigate such output by using the line-buffered printk() API.
> 
>   RCU used illegally from idle CPU!
>   rcu_scheduler_active = 2, debug_locks = 1
>   RSP: 0018:ffffffff88007bb8 EFLAGS: 00000286
>   RCU used illegally from extended quiescent state!
>    ORIG_RAX: ffffffffffffff13
>   1 lock held by swapper/1/0:
>   RAX: dffffc0000000000 RBX: 1ffffffff1000f7b RCX: 0000000000000000
>    #0: 
>   RDX: 1ffffffff10237b8 RSI: 0000000000000001 RDI: ffffffff8811bdc0
>   000000004b34587c
>   RBP: ffffffff88007bb8 R08: ffffffff88075e00 R09: 0000000000000000
>    (
>   R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
>   rcu_read_lock
>   R13: ffffffff88007c78 R14: 0000000000000000 R15: 0000000000000000
>   ){....}
>    arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
>    default_idle+0xc2/0x410 arch/x86/kernel/process.c:498
>   , at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46

WTH is that buffered API, and no, that breaks my earlyprintk stuff.
Tetsuo Handa Nov. 3, 2018, 2 a.m. UTC | #2
On 2018/11/02 22:36, Peter Zijlstra wrote:
> On Fri, Nov 02, 2018 at 10:31:57PM +0900, Tetsuo Handa wrote:
>> syzbot is sometimes getting mixed output like below due to concurrent
>> printk(). Mitigate such output by using the line-buffered printk() API.
>>
>>   RCU used illegally from idle CPU!
>>   rcu_scheduler_active = 2, debug_locks = 1
>>   RSP: 0018:ffffffff88007bb8 EFLAGS: 00000286
>>   RCU used illegally from extended quiescent state!
>>    ORIG_RAX: ffffffffffffff13
>>   1 lock held by swapper/1/0:
>>   RAX: dffffc0000000000 RBX: 1ffffffff1000f7b RCX: 0000000000000000
>>    #0: 
>>   RDX: 1ffffffff10237b8 RSI: 0000000000000001 RDI: ffffffff8811bdc0
>>   000000004b34587c
>>   RBP: ffffffff88007bb8 R08: ffffffff88075e00 R09: 0000000000000000
>>    (
>>   R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
>>   rcu_read_lock
>>   R13: ffffffff88007c78 R14: 0000000000000000 R15: 0000000000000000
>>   ){....}
>>    arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
>>    default_idle+0xc2/0x410 arch/x86/kernel/process.c:498
>>   , at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46
> 
> WTH is that buffered API, and no, that breaks my earlyprintk stuff.
> 

This API is nothing but a wrapper that reduces the frequency of direct
printk() calls by using snprintf() where possible. Thus, whatever your
earlyprintk stuff is, this API should not affect it.
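Roughly, the buffering boils down to something like the following minimal
sketch (names and sizes here are illustrative assumptions, and the real
patch also has to handle KERN_* prefixes and KERN_CONT markers, which
this glosses over):

  struct printk_buffer {
          unsigned int used;      /* bytes of text accumulated so far */
          char buf[256];          /* roughly one line of output */
  };

  static int printk_buffered(struct printk_buffer *buf, const char *fmt, ...)
  {
          va_list args;
          unsigned int space;
          int r;

          if (!buf) {                     /* no buffer: plain printk() */
                  va_start(args, fmt);
                  r = vprintk(fmt, args);
                  va_end(args);
                  return r;
          }
          space = sizeof(buf->buf) - buf->used;
          va_start(args, fmt);
          r = vsnprintf(buf->buf + buf->used, space, fmt, args);
          va_end(args);
          buf->used += min_t(unsigned int, r, space ? space - 1 : 0);
          /* Flush with a single printk() call once the line is complete. */
          if (buf->used && buf->buf[buf->used - 1] == '\n') {
                  printk("%s", buf->buf);
                  buf->used = 0;
          }
          return r;
  }

Each complete line reaches the log in one printk() call, which is what
prevents the interleaving shown above.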
Petr Mladek Nov. 6, 2018, 8:38 a.m. UTC | #3
On Fri 2018-11-02 14:36:29, Peter Zijlstra wrote:
> On Fri, Nov 02, 2018 at 10:31:57PM +0900, Tetsuo Handa wrote:
> > syzbot is sometimes getting mixed output like below due to concurrent
> > printk(). Mitigate such output by using the line-buffered printk() API.
> > 
> >   RCU used illegally from idle CPU!
> >   rcu_scheduler_active = 2, debug_locks = 1
> >   RSP: 0018:ffffffff88007bb8 EFLAGS: 00000286
> >   RCU used illegally from extended quiescent state!
> >    ORIG_RAX: ffffffffffffff13
> >   1 lock held by swapper/1/0:
> >   RAX: dffffc0000000000 RBX: 1ffffffff1000f7b RCX: 0000000000000000
> >    #0: 
> >   RDX: 1ffffffff10237b8 RSI: 0000000000000001 RDI: ffffffff8811bdc0
> >   000000004b34587c
> >   RBP: ffffffff88007bb8 R08: ffffffff88075e00 R09: 0000000000000000
> >    (
> >   R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
> >   rcu_read_lock
> >   R13: ffffffff88007c78 R14: 0000000000000000 R15: 0000000000000000
> >   ){....}
> >    arch_safe_halt arch/x86/include/asm/paravirt.h:94 [inline]
> >    default_idle+0xc2/0x410 arch/x86/kernel/process.c:498
> >   , at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46
> 
> WTH is that buffered API, and no, that breaks my earlyprintk stuff.

The API is supposed to replace the existing unreliable and tricky code;
see struct cont and the KERN_CONT flag in kernel/printk/printk.c.

This has been discussed many times. The preferred solution
is to use explicit buffers instead of having many
per-cpu/per-context ones and automagically switching between them.

The original idea was to use buffers on the stack. But the stack is
limited and people would need to guess the needed length.

The current approach uses a small pool of buffers and simple logic
to get/put one. If no buffer is available, it falls back
to direct printk(); see
https://lkml.kernel.org/r/1541165517-3557-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp

If you want to avoid buffering, you could set the number
of buffers to zero. Then it would always fall back to
direct printk().
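A sketch of that get/put pool logic, reusing the illustrative names from
the sketch earlier in the thread (the actual patch linked above differs
in details):

  #define NUM_PRINTK_BUFFERS 16

  static struct printk_buffer printk_buffers[NUM_PRINTK_BUFFERS];
  static atomic_t printk_buffer_busy[NUM_PRINTK_BUFFERS];

  struct printk_buffer *get_printk_buffer(void)
  {
          int i;

          for (i = 0; i < NUM_PRINTK_BUFFERS; i++)
                  if (!atomic_cmpxchg(&printk_buffer_busy[i], 0, 1))
                          return &printk_buffers[i];
          return NULL;  /* pool empty: callers fall back to direct printk() */
  }

  void put_printk_buffer(struct printk_buffer *buf)
  {
          if (!buf)
                  return;
          if (buf->used)          /* flush an unterminated partial line */
                  printk("%s", buf->buf);
          buf->used = 0;
          atomic_set(&printk_buffer_busy[buf - printk_buffers], 0);
  }

With the pool size set to zero, get_printk_buffer() always returns NULL
and every caller degrades to direct printk().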

Best Regards,
Petr
Sergey Senozhatsky Nov. 6, 2018, 9:05 a.m. UTC | #4
On (11/06/18 09:38), Petr Mladek wrote:
> 
> If you want to avoid buffering, you could set the number
> of buffers to zero. Then it would always fall back to
> direct printk().

This printk-fallback makes me wonder if 'cont' really can ever go away.
We would totally break cont printk-s that trap into the printk fallback,
as opposed to the current sometimes-cont-works-just-fine behavior.

	-ss
Tetsuo Handa Nov. 6, 2018, 9:56 a.m. UTC | #5
On 2018/11/06 17:38, Petr Mladek wrote:
> If you want to avoid buffering, you could set the number
> of buffers to zero. Then it would always fall back to
> direct printk().

  1 lock held by swapper/1/0:
   #0: 
   (
  rcu_read_lock
  ){....}
  , at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46

is not welcome, and

  1 lock held by swapper/1/0:
   #0:  (rcu_read_lock){....}, at: trace_call_bpf+0xf8/0x640 kernel/trace/bpf_trace.c:46

is welcome.

If you want to avoid the fallback to direct printk(), please allocate an
on-stack buffer of an appropriate size. Since a lockdep splat may happen
when the kernel stack is already tight, blindly allocating a large buffer
on the stack is not good.
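For illustration, a hypothetical on-stack variant (not part of the
proposed API) would look like the sketch below and would cost its
sizeof() in stack whether or not anything interesting gets printed:

  static void show_held_locks_onstack(struct task_struct *p)
  {
          struct printk_buffer pbuf = {}; /* ~260 bytes of kernel stack */

          printk_buffered(&pbuf, "%u locks held by %s/%d:\n",
                          p->lockdep_depth, p->comm, task_pid_nr(p));
          /*
           * The trailing '\n' already flushed the line; a pool-less
           * variant of put_printk_buffer() would still be needed for
           * paths that can return with a partial line in the buffer.
           */
  }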
Petr Mladek Nov. 6, 2018, 12:57 p.m. UTC | #6
On Tue 2018-11-06 18:05:44, Sergey Senozhatsky wrote:
> On (11/06/18 09:38), Petr Mladek wrote:
> > 
> > If you want to avoid buffering, you could set the number
> > of buffers to zero. Then it would always fall back to
> > direct printk().

This comment was a hint for Peter and his workarounds. He ignores most
of the printk() code and writes messages directly to the serial console.


> This printk-fallback makes me wonder if 'cont' really can ever go away.
> We would totally break cont printk-s that trap into the printk fallback,
> as opposed to the current sometimes-cont-works-just-fine behavior.

It could break things totally only if the new approach completely
fails. If you have any doubts or suggestions, please comment on
the patch adding the API.

Best Regards,
Petr
Petr Mladek Nov. 7, 2018, 3:19 p.m. UTC | #7
On Fri 2018-11-02 22:31:57, Tetsuo Handa wrote:
> syzbot is sometimes getting mixed output like below due to concurrent
> printk(). Mitigate such output by using the line-buffered printk() API.
> 
> @@ -2421,18 +2458,20 @@ static void check_chain_key(struct task_struct *curr)
>  print_usage_bug_scenario(struct held_lock *lock)
>  {
>  	struct lock_class *class = hlock_class(lock);
> +	struct printk_buffer *buf = get_printk_buffer();
>  
>  	printk(" Possible unsafe locking scenario:\n\n");
>  	printk("       CPU0\n");
>  	printk("       ----\n");
> -	printk("  lock(");
> -	__print_lock_name(class);
> -	printk(KERN_CONT ");\n");
> +	printk_buffered(buf, "  lock(");
> +	__print_lock_name(class, buf);
> +	printk_buffered(buf, ");\n");
>  	printk("  <Interrupt>\n");
> -	printk("    lock(");
> -	__print_lock_name(class);
> -	printk(KERN_CONT ");\n");
> +	printk_buffered(buf, "    lock(");
> +	__print_lock_name(class, buf);
> +	printk_buffered(buf, ");\n");
>  	printk("\n *** DEADLOCK ***\n\n");
> +	put_printk_buffer(buf);
>  }
>  
>  static int

I really hope that the maze of pr_cont() calls in lockdep.c is the most
complicated one that we would meet.

Anyway, the following comes to my mind:

1. The mixing of normal and buffered printk calls is a bit confusing
   and error prone. It would make sense to use the buffered printk
   everywhere in the given section of code even when it is not
   strictly needed.

2. I would replace "buf" with "pbuf" or "prbuf" to distinguish it a
   bit from other potential buffers.


Best Regards,
Petr
Sergey Senozhatsky Nov. 8, 2018, 4:45 a.m. UTC | #8
On (11/07/18 16:19), Petr Mladek wrote:
> > syzbot is sometimes getting mixed output like below due to concurrent
> > printk(). Mitigate such output by using the line-buffered printk() API.
> > 
> > @@ -2421,18 +2458,20 @@ static void check_chain_key(struct task_struct *curr)
> >  print_usage_bug_scenario(struct held_lock *lock)
> >  {
> >  	struct lock_class *class = hlock_class(lock);
> > +	struct printk_buffer *buf = get_printk_buffer();
> >  
> >  	printk(" Possible unsafe locking scenario:\n\n");
> >  	printk("       CPU0\n");
> >  	printk("       ----\n");
> > -	printk("  lock(");
> > -	__print_lock_name(class);
> > -	printk(KERN_CONT ");\n");
> > +	printk_buffered(buf, "  lock(");
> > +	__print_lock_name(class, buf);
> > +	printk_buffered(buf, ");\n");
> >  	printk("  <Interrupt>\n");
> > -	printk("    lock(");
> > -	__print_lock_name(class);
> > -	printk(KERN_CONT ");\n");
> > +	printk_buffered(buf, "    lock(");
> > +	__print_lock_name(class, buf);
> > +	printk_buffered(buf, ");\n");
> >  	printk("\n *** DEADLOCK ***\n\n");
> > +	put_printk_buffer(buf);
> >  }
> >  
> >  static int
> 
> I really hope that the maze of pr_cont() calls in lockdep.c is the most
> complicated one that we would meet.

Hmm... Yes, buffered/seq_buf printk looks like a hard-to-use API
when it comes to real-world cases like this.

So... here is a random and wild idea.

We actually already have an easy-to-use buffered printk. And it's per-CPU.
And it makes all printk-s on this CPU behave as if they were called
on a UP system. And it cures pr_cont(). And it doesn't require anyone to learn
any new printk API names. And it doesn't require any additional maintenance
work. And it doesn't require any printk->buffered_printk conversions. And
it's already in the kernel. And we gave it a name. And it's printk_safe.

a) lockdep reporting path should be atomic. And it's not a hot path,
   so local_irq_save/local_irq_restore will probably not cause a lot
   of trouble there.

b) We already have some lockdep reports coming via printk_safe.
   All those
	printk->console_driver->scheduler->lockdep
	printk->console_driver->timekeeping->lockdep
	etc.

   came via printk_safe path. So it's not a complete novelty.

c) printk_safe sections can nest.

d) No premature flushes. Everything looks the way it was supposed to
   look.

e) There are no out-of-line printk-s. We keep the actual order of events.

f) We flush it on panic.

g) Low maintenance costs.

So, can we just do the following? /* a sketch */

lockdep.c
	printk_safe_enter_irqsave(flags);
	lockdep_report();
	printk_safe_exit_irqrestore(flags);
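For reference, a condensed model of what those helpers and the existing
printk_safe routing do (simplified from kernel/printk/printk_safe.c, not
verbatim):

  static DEFINE_PER_CPU(int, printk_context);

  #define printk_safe_enter_irqsave(flags)        \
          do {                                    \
                  local_irq_save(flags);          \
                  this_cpu_inc(printk_context);   \
          } while (0)

  #define printk_safe_exit_irqrestore(flags)      \
          do {                                    \
                  this_cpu_dec(printk_context);   \
                  local_irq_restore(flags);       \
          } while (0)

  /* every printk() gets routed through here: */
  int vprintk_func(const char *fmt, va_list args)
  {
          if (this_cpu_read(printk_context))
                  /*
                   * Append to this CPU's buffer; an irq_work flushes it
                   * into the main log later, so lines from other CPUs
                   * cannot interleave with ours.
                   */
                  return vprintk_safe(fmt, args);
          return vprintk_default(fmt, args);
  }

The per-CPU counter is also why nesting (point c above) just works.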

	-ss
Tetsuo Handa Nov. 8, 2018, 11:37 a.m. UTC | #9
On 2018/11/08 13:45, Sergey Senozhatsky wrote:
> So, can we just do the following? /* a sketch */
> 
> lockdep.c
> 	printk_safe_enter_irqsave(flags);
> 	lockdep_report();
> 	printk_safe_exit_irqrestore(flags);

If the buffer size were large enough to hold messages from out_of_memory(),
I would like to use it for out_of_memory(), because delaying SIGKILL
while waiting for printk() to complete is not good. Surely we can't
hold all messages, because the amount of output from dump_tasks() is
unpredictable. Maybe we can hold all messages from dump_header() except
dump_tasks().

But isn't it essentially the same as
http://lkml.kernel.org/r/1493560477-3016-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
which Linus does not want?
Petr Mladek Nov. 8, 2018, 11:53 a.m. UTC | #10
On Thu 2018-11-08 13:45:10, Sergey Senozhatsky wrote:
> On (11/07/18 16:19), Petr Mladek wrote:
> > I really hope that the maze of pr_cont() calls in lockdep.c is the most
> > complicated one that we would meet.
> 
> Hmm... Yes, buffered/seq_buf printk looks like a hard-to-use API
> when it comes to real-world cases like this.
>
> So... here is a random and wild idea.
> 
> We actually already have an easy-to-use buffered printk. And it's per-CPU.
> And it makes all printk-s on this CPU behave as if they were called
> on a UP system. And it cures pr_cont(). And it doesn't require anyone to learn
> any new printk API names. And it doesn't require any additional maintenance
> work. And it doesn't require any printk->buffered_printk conversions. And
> it's already in the kernel. And we gave it a name. And it's printk_safe.
> 
> a) lockdep reporting path should be atomic. And it's not a hot path,
>    so local_irq_save/local_irq_restore will probably not cause a lot
>    of trouble there.
> 
> b) We already have some lockdep reports coming via printk_safe.
>    All those
> 	printk->console_driver->scheduler->lockdep
> 	printk->console_driver->timekeeping->lockdep
> 	etc.
> 
>    came via printk_safe path. So it's not a complete novelty.
> 
> c) printk_safe sections can nest.
> 
> d) No premature flushes. Everything looks the way it was supposed to
>    look.
> 
> e) There are no out-of-line printk-s. We keep the actual order of events.
> 
> f) We flush it on panic.
> 
> g) Low maintenance costs.
> 
> So, can we just do the following? /* a sketch */
> 
> lockdep.c
> 	printk_safe_enter_irqsave(flags);
> 	lockdep_report();
> 	printk_safe_exit_irqrestore(flags);

All this looks nice. Let's also look at it from the other side.
The following comes to my mind:

a) lockdep is not the only place where continuation lines get mixed.
   This patch also mentions RCU stalls. The other patch mentions
   OOM. I am sure that there will be more.

b) It is not obvious where printk_safe() would be necessary,
   while buffered printk is clearly connected with continuation
   lines.

c) I am not sure that disabling preemption would always be
   acceptable.

d) We might need to increase the size of the per-CPU buffers if
   they are used more widely.

e) People would need to learn a new (printk_safe) API when it is
   used outside the printk sources.

f) Losing the entire log is more painful than losing one line
   when the buffer never gets flushed.


Sigh, no solution is perfect. If only we could agree that one
way was better than the other.

Best Regards,
Petr
Sergey Senozhatsky Nov. 8, 2018, 12:44 p.m. UTC | #11
On (11/08/18 12:53), Petr Mladek wrote:
> > lockdep.c
> > 	printk_safe_enter_irqsave(flags);
> > 	lockdep_report();
> > 	printk_safe_exit_irqrestore(flags);
> 
> All this looks nice. Let's also look at it from the other side.
> The following comes to my mind:
> 
> a) lockdep is not the only place where continuation lines get mixed.
>    This patch also mentions RCU stalls. The other patch mentions
>    OOM. I am sure that there will be more.
> 
> b) It is not obvious where printk_safe() would be necessary,
>    while buffered printk is clearly connected with continuation
>    lines.
> 
> c) I am not sure that disabling preemption would always be
>    acceptable.
> 
> d) We might need to increase the size of the per-CPU buffers if
>    they are used more widely.
> 
> e) People would need to learn a new (printk_safe) API when it is
>    used outside the printk sources.
> 
> f) Losing the entire log is more painful than losing one line
>    when the buffer never gets flushed.
> 
> Sigh, no solution is perfect. If only we could agree that one
> way was better than the other.

I agree with what you are saying. All of the above (in my email)
was for lockdep only; that's why I mentioned lockdep several times.
Like I said, it was a random and wild idea.
I'm not proposing printk_safe as a "better" buffered printk for
everyone. The buffered_printk patch is pretty big and comes with a
price tag.

If the lockdep and OOM people ACK the buffered printk transition in
its current form, then we can go ahead.

It's debatable whether we need a fixed-size list of buffers or whether
we can do a kmalloc()+cont fallback. But once we have ACKs, we can
move forward.
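For comparison, the kmalloc() side of that trade-off could be as simple
as the following (purely illustrative -- this is the alternative under
debate, not agreed code; struct printk_buffer as in the earlier sketch):

  struct printk_buffer *get_printk_buffer(void)
  {
          /*
           * Reports can fire from atomic context, hence GFP_ATOMIC;
           * on allocation failure the caller falls back to 'cont'.
           */
          return kzalloc(sizeof(struct printk_buffer), GFP_ATOMIC);
  }

  void put_printk_buffer(struct printk_buffer *buf)
  {
          if (buf && buf->used)   /* flush any partial line first */
                  printk("%s", buf->buf);
          kfree(buf);
  }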

	-ss
Sergey Senozhatsky Nov. 8, 2018, 2:21 p.m. UTC | #12
On (11/08/18 21:44), Sergey Senozhatsky wrote:
> 
> If the lockdep and OOM people ACK the buffered printk transition in
> its current form, then we can go ahead.

That printk_safe approach in lockdep, BTW, does not change (convert)
any printk-s within lockdep, so Peter's earlycon should not be
affected. Peter will have earlycon working, and syzbot folks will
have buffered lockdep printouts.

	-ss
Sergey Senozhatsky Nov. 9, 2018, 6:12 a.m. UTC | #13
On (11/08/18 20:37), Tetsuo Handa wrote:
> On 2018/11/08 13:45, Sergey Senozhatsky wrote:
> > So, can we just do the following? /* a sketch */
> > 
> > lockdep.c
> > 	printk_safe_enter_irqsave(flags);
> > 	lockdep_report();
> > 	printk_safe_exit_irqrestore(flags);
> 
> If the buffer size were large enough to hold messages from out_of_memory(),
> I would like to use it for out_of_memory(), because delaying SIGKILL
> while waiting for printk() to complete is not good. Surely we can't
> hold all messages, because the amount of output from dump_tasks() is
> unpredictable. Maybe we can hold all messages from dump_header() except
> dump_tasks().
> 
> But isn't it essentially the same as
> http://lkml.kernel.org/r/1493560477-3016-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
> which Linus does not want?

Dunno. I guess we still haven't heard from Linus because he did quite a good
job setting up his 'email filters' ;)

Converting the existing users to buffered printk is not so simple.
Apparently there are different paths; some can afford buffered printk, some
cannot. Some 'cont' users take advantage of the transparent 'cont'
context: start 'cont' output in function A: A()->pr_cont(), continue it in
B: A()->B()->pr_cont(), and then in C: A()->B()->C()->pr_cont(), and
finally flush in A: A()->pr_cont("\n"); see the sketch below. And then some
paths have the early_printk requirement. We can break the 'transparent
cont' by passing buffer pointers around [it can get a bit hairy; look at
the lockdep patch], but the early_printk requirement is a different beast.
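In code, that transparent 'cont' pattern looks like this (illustrative
functions, not taken from the kernel):

  static void c(void) { pr_cont(" and C"); }
  static void b(void) { pr_cont(", then B"); c(); }

  static void a(void)
  {
          printk(KERN_INFO "A started"); /* opens the continuation line */
          b();                           /* b() and c() append to it blindly */
          pr_cont("\n");                 /* a() finally terminates/flushes it */
  }

Converting this means threading a struct printk_buffer * through a(), b()
and c() -- exactly the pointer-passing that gets hairy in the lockdep patch.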

So in my email I was not advertising printk_safe as a "buffered printk
for everyone"; I was just talking about lockdep. It's a bit doubtful that
Peter will ACK a lockdep transition to buffered printk.

	-ss
Tetsuo Handa Nov. 9, 2018, 9:54 a.m. UTC | #14
On 2018/11/08 0:19, Petr Mladek wrote:
> I really hope that the maze of pr_cont() calls in lockdep.c is the most
> complicated one that we would meet.
> 
> Anyway, the following comes to my mind:
> 
> 1. The mixing of normal and buffered printk calls is a bit confusing
>    and error prone. It would make sense to use the buffered printk
>    everywhere in the given section of code even when it is not
>    strictly needed.

Here is a draft version of how the code would look...

 include/linux/printk.h   |  16 ++
 kernel/locking/lockdep.c | 711 ++++++++++++++++++++++++++---------------------
 2 files changed, 404 insertions(+), 323 deletions(-)

diff --git a/include/linux/printk.h b/include/linux/printk.h
index cf3eccf..ff4f66c 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -530,4 +530,20 @@ static inline void print_hex_dump_debug(const char *prefix_str, int prefix_type,
 }
 #endif
 
+struct printk_buffer;
+struct printk_buffer *get_printk_buffer(void);
+void put_printk_buffer(struct printk_buffer *buf);
+__printf(2, 3)
+int bprintk(struct printk_buffer *buf, const char *fmt, ...);
+
+#define bpr_info(buf, fmt, ...)				\
+	bprintk(buf, KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
+#define bpr_warning(buf, fmt, ...)				\
+	bprintk(buf, KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
+#define bpr_warn bpr_warning
+#define bpr_err(buf, fmt, ...)				\
+	bprintk(buf, KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
+#define bpr_cont(buf, fmt, ...)			\
+	bprintk(buf, KERN_CONT fmt, ##__VA_ARGS__)
+
 #endif
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2..22b85aa 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -493,7 +493,7 @@ void get_usage_chars(struct lock_class *class, char usage[LOCK_USAGE_CHARS])
 	usage[i] = '\0';
 }
 
-static void __print_lock_name(struct lock_class *class)
+static void __print_lock_name(struct printk_buffer *buf, struct lock_class *class)
 {
 	char str[KSYM_NAME_LEN];
 	const char *name;
@@ -501,28 +501,28 @@ static void __print_lock_name(struct lock_class *class)
 	name = class->name;
 	if (!name) {
 		name = __get_key_name(class->key, str);
-		printk(KERN_CONT "%s", name);
+		bprintk(buf, KERN_CONT "%s", name);
 	} else {
-		printk(KERN_CONT "%s", name);
+		bprintk(buf, KERN_CONT "%s", name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
+			bprintk(buf, KERN_CONT "#%d", class->name_version);
 		if (class->subclass)
-			printk(KERN_CONT "/%d", class->subclass);
+			bprintk(buf, KERN_CONT "/%d", class->subclass);
 	}
 }
 
-static void print_lock_name(struct lock_class *class)
+static void print_lock_name(struct printk_buffer *buf, struct lock_class *class)
 {
 	char usage[LOCK_USAGE_CHARS];
 
 	get_usage_chars(class, usage);
 
-	printk(KERN_CONT " (");
-	__print_lock_name(class);
-	printk(KERN_CONT "){%s}", usage);
+	bprintk(buf, KERN_CONT " (");
+	__print_lock_name(buf, class);
+	bprintk(buf, KERN_CONT "){%s}", usage);
 }
 
-static void print_lockdep_cache(struct lockdep_map *lock)
+static void print_lockdep_cache(struct printk_buffer *buf, struct lockdep_map *lock)
 {
 	const char *name;
 	char str[KSYM_NAME_LEN];
@@ -531,10 +531,10 @@ static void print_lockdep_cache(struct lockdep_map *lock)
 	if (!name)
 		name = __get_key_name(lock->key->subkeys, str);
 
-	printk(KERN_CONT "%s", name);
+	bprintk(buf, KERN_CONT "%s", name);
 }
 
-static void print_lock(struct held_lock *hlock)
+static void print_lock(struct printk_buffer *buf, struct held_lock *hlock)
 {
 	/*
 	 * We can be called locklessly through debug_show_all_locks() so be
@@ -546,23 +546,23 @@ static void print_lock(struct held_lock *hlock)
 	barrier();
 
 	if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) {
-		printk(KERN_CONT "<RELEASED>\n");
+		bprintk(buf, KERN_CONT "<RELEASED>\n");
 		return;
 	}
 
-	printk(KERN_CONT "%p", hlock->instance);
-	print_lock_name(lock_classes + class_idx - 1);
-	printk(KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
+	bprintk(buf, KERN_CONT "%p", hlock->instance);
+	print_lock_name(buf, lock_classes + class_idx - 1);
+	bprintk(buf, KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
 }
 
-static void lockdep_print_held_locks(struct task_struct *p)
+static void lockdep_print_held_locks(struct printk_buffer *buf, struct task_struct *p)
 {
 	int i, depth = READ_ONCE(p->lockdep_depth);
 
 	if (!depth)
-		printk("no locks held by %s/%d.\n", p->comm, task_pid_nr(p));
+		bprintk(buf, "no locks held by %s/%d.\n", p->comm, task_pid_nr(p));
 	else
-		printk("%d lock%s held by %s/%d:\n", depth,
+		bprintk(buf, "%d lock%s held by %s/%d:\n", depth,
 		       depth > 1 ? "s" : "", p->comm, task_pid_nr(p));
 	/*
 	 * It's not reliable to print a task's held locks if it's not sleeping
@@ -571,14 +571,14 @@ static void lockdep_print_held_locks(struct task_struct *p)
 	if (p->state == TASK_RUNNING && p != current)
 		return;
 	for (i = 0; i < depth; i++) {
-		printk(" #%d: ", i);
-		print_lock(p->held_locks + i);
+		bprintk(buf, " #%d: ", i);
+		print_lock(buf, p->held_locks + i);
 	}
 }
 
-static void print_kernel_ident(void)
+static void print_kernel_ident(struct printk_buffer *buf)
 {
-	printk("%s %.*s %s\n", init_utsname()->release,
+	bprintk(buf, "%s %.*s %s\n", init_utsname()->release,
 		(int)strcspn(init_utsname()->version, " "),
 		init_utsname()->version,
 		print_tainted());
@@ -804,12 +804,14 @@ static bool assign_lock_key(struct lockdep_map *lock)
 	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
 
 	if (verbose(class)) {
+		struct printk_buffer *buf = get_printk_buffer();
 		graph_unlock();
 
-		printk("\nnew class %px: %s", class->key, class->name);
+		bprintk(buf, "\nnew class %px: %s", class->key, class->name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
-		printk(KERN_CONT "\n");
+			bprintk(buf, KERN_CONT "#%d", class->name_version);
+		bprintk(buf, KERN_CONT "\n");
+		put_printk_buffer(buf);
 		dump_stack();
 
 		if (!graph_lock()) {
@@ -1081,20 +1083,20 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
  * has been detected):
  */
 static noinline int
-print_circular_bug_entry(struct lock_list *target, int depth)
+print_circular_bug_entry(struct printk_buffer *buf, struct lock_list *target, int depth)
 {
 	if (debug_locks_silent)
 		return 0;
-	printk("\n-> #%u", depth);
-	print_lock_name(target->class);
-	printk(KERN_CONT ":\n");
+	bprintk(buf, "\n-> #%u", depth);
+	print_lock_name(buf, target->class);
+	bprintk(buf, KERN_CONT ":\n");
 	print_stack_trace(&target->trace, 6);
 
 	return 0;
 }
 
 static void
-print_circular_lock_scenario(struct held_lock *src,
+print_circular_lock_scenario(struct printk_buffer *buf, struct held_lock *src,
 			     struct held_lock *tgt,
 			     struct lock_list *prt)
 {
@@ -1116,31 +1118,31 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (parent != source) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(source);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(parent);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(target);
-		printk(KERN_CONT "\n\n");
+		bprintk(buf, "Chain exists of:\n  ");
+		__print_lock_name(buf, source);
+		bprintk(buf, KERN_CONT " --> ");
+		__print_lock_name(buf, parent);
+		bprintk(buf, KERN_CONT " --> ");
+		__print_lock_name(buf, target);
+		bprintk(buf, KERN_CONT "\n\n");
 	}
 
-	printk(" Possible unsafe locking scenario:\n\n");
-	printk("       CPU0                    CPU1\n");
-	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(parent);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(source);
-	printk(KERN_CONT ");\n");
-	printk("\n *** DEADLOCK ***\n\n");
+	bprintk(buf, " Possible unsafe locking scenario:\n\n");
+	bprintk(buf, "       CPU0                    CPU1\n");
+	bprintk(buf, "       ----                    ----\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, target);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "                               lock(");
+	__print_lock_name(buf, parent);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "                               lock(");
+	__print_lock_name(buf, target);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, source);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "\n *** DEADLOCK ***\n\n");
 }
 
 /*
@@ -1148,7 +1150,7 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
  * header first:
  */
 static noinline int
-print_circular_bug_header(struct lock_list *entry, unsigned int depth,
+print_circular_bug_header(struct printk_buffer *buf, struct lock_list *entry, unsigned int depth,
 			struct held_lock *check_src,
 			struct held_lock *check_tgt)
 {
@@ -1157,22 +1159,22 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 	if (debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("======================================================\n");
-	pr_warn("WARNING: possible circular locking dependency detected\n");
-	print_kernel_ident();
-	pr_warn("------------------------------------------------------\n");
-	pr_warn("%s/%d is trying to acquire lock:\n",
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "======================================================\n");
+	bpr_warn(buf, "WARNING: possible circular locking dependency detected\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "------------------------------------------------------\n");
+	bpr_warn(buf, "%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(check_src);
+	print_lock(buf, check_src);
 
-	pr_warn("\nbut task is already holding lock:\n");
+	bpr_warn(buf, "\nbut task is already holding lock:\n");
 
-	print_lock(check_tgt);
-	pr_warn("\nwhich lock already depends on the new lock.\n\n");
-	pr_warn("\nthe existing dependency chain (in reverse order) is:\n");
+	print_lock(buf, check_tgt);
+	bpr_warn(buf, "\nwhich lock already depends on the new lock.\n\n");
+	bpr_warn(buf, "\nthe existing dependency chain (in reverse order) is:\n");
 
-	print_circular_bug_entry(entry, depth);
+	print_circular_bug_entry(buf, entry, depth);
 
 	return 0;
 }
@@ -1188,6 +1190,7 @@ static noinline int print_circular_bug(struct lock_list *this,
 				struct held_lock *check_tgt,
 				struct stack_trace *trace)
 {
+	struct printk_buffer *buf;
 	struct task_struct *curr = current;
 	struct lock_list *parent;
 	struct lock_list *first_parent;
@@ -1199,25 +1202,27 @@ static noinline int print_circular_bug(struct lock_list *this,
 	if (!save_trace(&this->trace))
 		return 0;
 
+	buf = get_printk_buffer();
 	depth = get_lock_depth(target);
 
-	print_circular_bug_header(target, depth, check_src, check_tgt);
+	print_circular_bug_header(buf, target, depth, check_src, check_tgt);
 
 	parent = get_lock_parent(target);
 	first_parent = parent;
 
 	while (parent) {
-		print_circular_bug_entry(parent, --depth);
+		print_circular_bug_entry(buf, parent, --depth);
 		parent = get_lock_parent(parent);
 	}
 
-	printk("\nother info that might help us debug this:\n\n");
-	print_circular_lock_scenario(check_src, check_tgt,
+	bprintk(buf, "\nother info that might help us debug this:\n\n");
+	print_circular_lock_scenario(buf, check_src, check_tgt,
 				     first_parent);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(buf, curr);
 
-	printk("\nstack backtrace:\n");
+	bprintk(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -1385,29 +1390,29 @@ static inline int usage_match(struct lock_list *entry, void *bit)
 	return result;
 }
 
-static void print_lock_class_header(struct lock_class *class, int depth)
+static void print_lock_class_header(struct printk_buffer *buf, struct lock_class *class, int depth)
 {
 	int bit;
 
-	printk("%*s->", depth, "");
-	print_lock_name(class);
+	bprintk(buf, "%*s->", depth, "");
+	print_lock_name(buf, class);
 #ifdef CONFIG_DEBUG_LOCKDEP
-	printk(KERN_CONT " ops: %lu", debug_class_ops_read(class));
+	bprintk(buf, KERN_CONT " ops: %lu", debug_class_ops_read(class));
 #endif
-	printk(KERN_CONT " {\n");
+	bprintk(buf, KERN_CONT " {\n");
 
 	for (bit = 0; bit < LOCK_USAGE_STATES; bit++) {
 		if (class->usage_mask & (1 << bit)) {
 			int len = depth;
 
-			len += printk("%*s   %s", depth, "", usage_str[bit]);
-			len += printk(KERN_CONT " at:\n");
+			len += bprintk(buf, "%*s   %s", depth, "", usage_str[bit]);
+			len += bprintk(buf, KERN_CONT " at:\n");
 			print_stack_trace(class->usage_traces + bit, len);
 		}
 	}
-	printk("%*s }\n", depth, "");
+	bprintk(buf, "%*s }\n", depth, "");
 
-	printk("%*s ... key      at: [<%px>] %pS\n",
+	bprintk(buf, "%*s ... key      at: [<%px>] %pS\n",
 		depth, "", class->key, class->key);
 }
 
@@ -1415,7 +1420,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
  * printk the shortest lock dependencies from @start to @end in reverse order:
  */
 static void __used
-print_shortest_lock_dependencies(struct lock_list *leaf,
+print_shortest_lock_dependencies(struct printk_buffer *buf, struct lock_list *leaf,
 				struct lock_list *root)
 {
 	struct lock_list *entry = leaf;
@@ -1425,13 +1430,13 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 	depth = get_lock_depth(leaf);
 
 	do {
-		print_lock_class_header(entry->class, depth);
-		printk("%*s ... acquired at:\n", depth, "");
+		print_lock_class_header(buf, entry->class, depth);
+		bprintk(buf, "%*s ... acquired at:\n", depth, "");
 		print_stack_trace(&entry->trace, 2);
-		printk("\n");
+		bprintk(buf, "\n");
 
 		if (depth == 0 && (entry != root)) {
-			printk("lockdep:%s bad path found in chain graph\n", __func__);
+			bprintk(buf, "lockdep:%s bad path found in chain graph\n", __func__);
 			break;
 		}
 
@@ -1443,7 +1448,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 }
 
 static void
-print_irq_lock_scenario(struct lock_list *safe_entry,
+print_irq_lock_scenario(struct printk_buffer *buf, struct lock_list *safe_entry,
 			struct lock_list *unsafe_entry,
 			struct lock_class *prev_class,
 			struct lock_class *next_class)
@@ -1469,33 +1474,33 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (middle_class != unsafe_class) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(safe_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(middle_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(unsafe_class);
-		printk(KERN_CONT "\n\n");
+		bprintk(buf, "Chain exists of:\n  ");
+		__print_lock_name(buf, safe_class);
+		bprintk(buf, KERN_CONT " --> ");
+		__print_lock_name(buf, middle_class);
+		bprintk(buf, KERN_CONT " --> ");
+		__print_lock_name(buf, unsafe_class);
+		bprintk(buf, KERN_CONT "\n\n");
 	}
 
-	printk(" Possible interrupt unsafe locking scenario:\n\n");
-	printk("       CPU0                    CPU1\n");
-	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(unsafe_class);
-	printk(KERN_CONT ");\n");
-	printk("                               local_irq_disable();\n");
-	printk("                               lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(middle_class);
-	printk(KERN_CONT ");\n");
-	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
-	printk("\n *** DEADLOCK ***\n\n");
+	bprintk(buf, " Possible interrupt unsafe locking scenario:\n\n");
+	bprintk(buf, "       CPU0                    CPU1\n");
+	bprintk(buf, "       ----                    ----\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, unsafe_class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "                               local_irq_disable();\n");
+	bprintk(buf, "                               lock(");
+	__print_lock_name(buf, safe_class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "                               lock(");
+	__print_lock_name(buf, middle_class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "  <Interrupt>\n");
+	bprintk(buf, "    lock(");
+	__print_lock_name(buf, safe_class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "\n *** DEADLOCK ***\n\n");
 }
 
 static int
@@ -1510,65 +1515,69 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 			 enum lock_usage_bit bit2,
 			 const char *irqclass)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("=====================================================\n");
-	pr_warn("WARNING: %s-safe -> %s-unsafe lock order detected\n",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=====================================================\n");
+	bpr_warn(buf, "WARNING: %s-safe -> %s-unsafe lock order detected\n",
 		irqclass, irqclass);
-	print_kernel_ident();
-	pr_warn("-----------------------------------------------------\n");
-	pr_warn("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
+	print_kernel_ident(buf);
+	bpr_warn(buf, "-----------------------------------------------------\n");
+	bpr_warn(buf, "%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
 		curr->comm, task_pid_nr(curr),
 		curr->hardirq_context, hardirq_count() >> HARDIRQ_SHIFT,
 		curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
 		curr->hardirqs_enabled,
 		curr->softirqs_enabled);
-	print_lock(next);
+	print_lock(buf, next);
 
-	pr_warn("\nand this task is already holding:\n");
-	print_lock(prev);
-	pr_warn("which would create a new lock dependency:\n");
-	print_lock_name(hlock_class(prev));
-	pr_cont(" ->");
-	print_lock_name(hlock_class(next));
-	pr_cont("\n");
+	bpr_warn(buf, "\nand this task is already holding:\n");
+	print_lock(buf, prev);
+	bpr_warn(buf, "which would create a new lock dependency:\n");
+	print_lock_name(buf, hlock_class(prev));
+	bpr_cont(buf, " ->");
+	print_lock_name(buf, hlock_class(next));
+	bpr_cont(buf, "\n");
 
-	pr_warn("\nbut this new dependency connects a %s-irq-safe lock:\n",
+	bpr_warn(buf, "\nbut this new dependency connects a %s-irq-safe lock:\n",
 		irqclass);
-	print_lock_name(backwards_entry->class);
-	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
+	print_lock_name(buf, backwards_entry->class);
+	bpr_warn(buf, "\n... which became %s-irq-safe at:\n", irqclass);
 
 	print_stack_trace(backwards_entry->class->usage_traces + bit1, 1);
 
-	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
-	print_lock_name(forwards_entry->class);
-	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
-	pr_warn("...");
+	bpr_warn(buf, "\nto a %s-irq-unsafe lock:\n", irqclass);
+	print_lock_name(buf, forwards_entry->class);
+	bpr_warn(buf, "\n... which became %s-irq-unsafe at:\n", irqclass);
+	bpr_warn(buf, "...");
 
 	print_stack_trace(forwards_entry->class->usage_traces + bit2, 1);
 
-	pr_warn("\nother info that might help us debug this:\n\n");
-	print_irq_lock_scenario(backwards_entry, forwards_entry,
+	bpr_warn(buf, "\nother info that might help us debug this:\n\n");
+	print_irq_lock_scenario(buf, backwards_entry, forwards_entry,
 				hlock_class(prev), hlock_class(next));
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass);
+	bpr_warn(buf, "\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass);
 	if (!save_trace(&prev_root->trace))
-		return 0;
-	print_shortest_lock_dependencies(backwards_entry, prev_root);
+		goto done;
+	print_shortest_lock_dependencies(buf, backwards_entry, prev_root);
 
-	pr_warn("\nthe dependencies between the lock to be acquired");
-	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
+	bpr_warn(buf, "\nthe dependencies between the lock to be acquired");
+	bpr_warn(buf, " and %s-irq-unsafe lock:\n", irqclass);
 	if (!save_trace(&next_root->trace))
-		return 0;
-	print_shortest_lock_dependencies(forwards_entry, next_root);
+		goto done;
+	print_shortest_lock_dependencies(buf, forwards_entry, next_root);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
 	dump_stack();
-
+ done:
+	put_printk_buffer(buf);
 	return 0;
 }
 
@@ -1716,48 +1725,52 @@ static inline void inc_chains(void)
 #endif
 
 static void
-print_deadlock_scenario(struct held_lock *nxt,
+print_deadlock_scenario(struct printk_buffer *buf, struct held_lock *nxt,
 			     struct held_lock *prv)
 {
 	struct lock_class *next = hlock_class(nxt);
 	struct lock_class *prev = hlock_class(prv);
 
-	printk(" Possible unsafe locking scenario:\n\n");
-	printk("       CPU0\n");
-	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(prev);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(next);
-	printk(KERN_CONT ");\n");
-	printk("\n *** DEADLOCK ***\n\n");
-	printk(" May be due to missing lock nesting notation\n\n");
+	bprintk(buf, " Possible unsafe locking scenario:\n\n");
+	bprintk(buf, "       CPU0\n");
+	bprintk(buf, "       ----\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, prev);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, next);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "\n *** DEADLOCK ***\n\n");
+	bprintk(buf, " May be due to missing lock nesting notation\n\n");
 }
 
 static int
 print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
 		   struct held_lock *next)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("============================================\n");
-	pr_warn("WARNING: possible recursive locking detected\n");
-	print_kernel_ident();
-	pr_warn("--------------------------------------------\n");
-	pr_warn("%s/%d is trying to acquire lock:\n",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "============================================\n");
+	bpr_warn(buf, "WARNING: possible recursive locking detected\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "--------------------------------------------\n");
+	bpr_warn(buf, "%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(next);
-	pr_warn("\nbut task is already holding lock:\n");
-	print_lock(prev);
+	print_lock(buf, next);
+	bpr_warn(buf, "\nbut task is already holding lock:\n");
+	print_lock(buf, prev);
 
-	pr_warn("\nother info that might help us debug this:\n");
-	print_deadlock_scenario(next, prev);
-	lockdep_print_held_locks(curr);
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
+	print_deadlock_scenario(buf, next, prev);
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -2048,49 +2061,49 @@ static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 print_chain_key_iteration(struct printk_buffer *buf, int class_idx, u64 chain_key)
 {
 	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
 
-	printk(" class_idx:%d -> chain_key:%016Lx",
+	bprintk(buf, " class_idx:%d -> chain_key:%016Lx",
 		class_idx,
 		(unsigned long long)new_chain_key);
 	return new_chain_key;
 }
 
 static void
-print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next)
+print_chain_keys_held_locks(struct printk_buffer *buf, struct task_struct *curr, struct held_lock *hlock_next)
 {
 	struct held_lock *hlock;
 	u64 chain_key = 0;
 	int depth = curr->lockdep_depth;
 	int i;
 
-	printk("depth: %u\n", depth + 1);
+	bprintk(buf, "depth: %u\n", depth + 1);
 	for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
+		chain_key = print_chain_key_iteration(buf, hlock->class_idx, chain_key);
 
-		print_lock(hlock);
+		print_lock(buf, hlock);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
-	print_lock(hlock_next);
+	print_chain_key_iteration(buf, hlock_next->class_idx, chain_key);
+	print_lock(buf, hlock_next);
 }
 
-static void print_chain_keys_chain(struct lock_chain *chain)
+static void print_chain_keys_chain(struct printk_buffer *buf, struct lock_chain *chain)
 {
 	int i;
 	u64 chain_key = 0;
 	int class_id;
 
-	printk("depth: %u\n", chain->depth);
+	bprintk(buf, "depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
 		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id + 1, chain_key);
+		chain_key = print_chain_key_iteration(buf, class_id + 1, chain_key);
 
-		print_lock_name(lock_classes + class_id);
-		printk("\n");
+		print_lock_name(buf, lock_classes + class_id);
+		bprintk(buf, "\n");
 	}
 }
 
@@ -2098,21 +2111,24 @@ static void print_collision(struct task_struct *curr,
 			struct held_lock *hlock_next,
 			struct lock_chain *chain)
 {
-	pr_warn("\n");
-	pr_warn("============================\n");
-	pr_warn("WARNING: chain_key collision\n");
-	print_kernel_ident();
-	pr_warn("----------------------------\n");
-	pr_warn("%s/%d: ", current->comm, task_pid_nr(current));
-	pr_warn("Hash chain already cached but the contents don't match!\n");
+	struct printk_buffer *buf = get_printk_buffer();
 
-	pr_warn("Held locks:");
-	print_chain_keys_held_locks(curr, hlock_next);
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "============================\n");
+	bpr_warn(buf, "WARNING: chain_key collision\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "----------------------------\n");
+	bpr_warn(buf, "%s/%d: ", current->comm, task_pid_nr(current));
+	bpr_warn(buf, "Hash chain already cached but the contents don't match!\n");
 
-	pr_warn("Locks in cached chain:");
-	print_chain_keys_chain(chain);
+	bpr_warn(buf, "Held locks:");
+	print_chain_keys_held_locks(buf, curr, hlock_next);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "Locks in cached chain:");
+	print_chain_keys_chain(buf, chain);
+
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 }
 #endif
@@ -2418,57 +2434,61 @@ static void check_chain_key(struct task_struct *curr)
 }
 
 static void
-print_usage_bug_scenario(struct held_lock *lock)
+print_usage_bug_scenario(struct printk_buffer *buf, struct held_lock *lock)
 {
 	struct lock_class *class = hlock_class(lock);
 
-	printk(" Possible unsafe locking scenario:\n\n");
-	printk("       CPU0\n");
-	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
-	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
-	printk("\n *** DEADLOCK ***\n\n");
+	bprintk(buf, " Possible unsafe locking scenario:\n\n");
+	bprintk(buf, "       CPU0\n");
+	bprintk(buf, "       ----\n");
+	bprintk(buf, "  lock(");
+	__print_lock_name(buf, class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "  <Interrupt>\n");
+	bprintk(buf, "    lock(");
+	__print_lock_name(buf, class);
+	bprintk(buf, KERN_CONT ");\n");
+	bprintk(buf, "\n *** DEADLOCK ***\n\n");
 }
 
 static int
 print_usage_bug(struct task_struct *curr, struct held_lock *this,
 		enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("================================\n");
-	pr_warn("WARNING: inconsistent lock state\n");
-	print_kernel_ident();
-	pr_warn("--------------------------------\n");
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "================================\n");
+	bpr_warn(buf, "WARNING: inconsistent lock state\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "--------------------------------\n");
 
-	pr_warn("inconsistent {%s} -> {%s} usage.\n",
+	bpr_warn(buf, "inconsistent {%s} -> {%s} usage.\n",
 		usage_str[prev_bit], usage_str[new_bit]);
 
-	pr_warn("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] takes:\n",
+	bpr_warn(buf, "%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] takes:\n",
 		curr->comm, task_pid_nr(curr),
 		trace_hardirq_context(curr), hardirq_count() >> HARDIRQ_SHIFT,
 		trace_softirq_context(curr), softirq_count() >> SOFTIRQ_SHIFT,
 		trace_hardirqs_enabled(curr),
 		trace_softirqs_enabled(curr));
-	print_lock(this);
+	print_lock(buf, this);
 
-	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
+	bpr_warn(buf, "{%s} state was registered at:\n", usage_str[prev_bit]);
 	print_stack_trace(hlock_class(this)->usage_traces + prev_bit, 1);
 
 	print_irqtrace_events(curr);
-	pr_warn("\nother info that might help us debug this:\n");
-	print_usage_bug_scenario(this);
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
+	print_usage_bug_scenario(buf, this);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -2500,6 +2520,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 			struct held_lock *this, int forwards,
 			const char *irqclass)
 {
+	struct printk_buffer *buf;
 	struct lock_list *entry = other;
 	struct lock_list *middle = NULL;
 	int depth;
@@ -2507,28 +2528,29 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("========================================================\n");
-	pr_warn("WARNING: possible irq lock inversion dependency detected\n");
-	print_kernel_ident();
-	pr_warn("--------------------------------------------------------\n");
-	pr_warn("%s/%d just changed the state of lock:\n",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "========================================================\n");
+	bpr_warn(buf, "WARNING: possible irq lock inversion dependency detected\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "--------------------------------------------------------\n");
+	bpr_warn(buf, "%s/%d just changed the state of lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(this);
+	print_lock(buf, this);
 	if (forwards)
-		pr_warn("but this lock took another, %s-unsafe lock in the past:\n", irqclass);
+		bpr_warn(buf, "but this lock took another, %s-unsafe lock in the past:\n", irqclass);
 	else
-		pr_warn("but this lock was taken by another, %s-safe lock in the past:\n", irqclass);
-	print_lock_name(other->class);
-	pr_warn("\n\nand interrupts could create inverse lock ordering between them.\n\n");
+		bpr_warn(buf, "but this lock was taken by another, %s-safe lock in the past:\n", irqclass);
+	print_lock_name(buf, other->class);
+	bpr_warn(buf, "\n\nand interrupts could create inverse lock ordering between them.\n\n");
 
-	pr_warn("\nother info that might help us debug this:\n");
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
 
 	/* Find a middle lock (if one exists) */
 	depth = get_lock_depth(other);
 	do {
 		if (depth == 0 && (entry != root)) {
-			pr_warn("lockdep:%s bad path found in chain graph\n", __func__);
+			bpr_warn(buf, "lockdep:%s bad path found in chain graph\n", __func__);
 			break;
 		}
 		middle = entry;
@@ -2536,20 +2558,21 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 		depth--;
 	} while (entry && entry != root && (depth >= 0));
 	if (forwards)
-		print_irq_lock_scenario(root, other,
+		print_irq_lock_scenario(buf, root, other,
 			middle ? middle->class : root->class, other->class);
 	else
-		print_irq_lock_scenario(other, root,
+		print_irq_lock_scenario(buf, other, root,
 			middle ? middle->class : other->class, root->class);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nthe shortest dependencies between 2nd lock and 1st lock:\n");
+	bpr_warn(buf, "\nthe shortest dependencies between 2nd lock and 1st lock:\n");
 	if (!save_trace(&root->trace))
 		return 0;
-	print_shortest_lock_dependencies(other, root);
+	print_shortest_lock_dependencies(buf, other, root);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -3076,8 +3099,11 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	 * We must printk outside of the graph_lock:
 	 */
 	if (ret == 2) {
-		printk("\nmarked lock as {%s}:\n", usage_str[new_bit]);
-		print_lock(this);
+		struct printk_buffer *buf = get_printk_buffer();
+
+		bprintk(buf, "\nmarked lock as {%s}:\n", usage_str[new_bit]);
+		print_lock(buf, this);
+		put_printk_buffer(buf);
 		print_irqtrace_events(curr);
 		dump_stack();
 	}
@@ -3160,30 +3186,34 @@ void lockdep_init_map(struct lockdep_map *lock, const char *name,
 				struct held_lock *hlock,
 				unsigned long ip)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off())
 		return 0;
 	if (debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("==================================\n");
-	pr_warn("WARNING: Nested lock was not taken\n");
-	print_kernel_ident();
-	pr_warn("----------------------------------\n");
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "==================================\n");
+	bpr_warn(buf, "WARNING: Nested lock was not taken\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "----------------------------------\n");
 
-	pr_warn("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
-	print_lock(hlock);
+	bpr_warn(buf, "%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
+	print_lock(buf, hlock);
 
-	pr_warn("\nbut this task is not holding:\n");
-	pr_warn("%s\n", hlock->nest_lock->name);
+	bpr_warn(buf, "\nbut this task is not holding:\n");
+	bpr_warn(buf, "%s\n", hlock->nest_lock->name);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
 	dump_stack();
 
-	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -3232,10 +3262,13 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	debug_class_ops_inc(class);
 
 	if (very_verbose(class)) {
-		printk("\nacquire class [%px] %s", class->key, class->name);
+		struct printk_buffer *buf = get_printk_buffer();
+
+		bprintk(buf, "\nacquire class [%px] %s", class->key, class->name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
-		printk(KERN_CONT "\n");
+			bprintk(buf, KERN_CONT "#%d", class->name_version);
+		bprintk(buf, KERN_CONT "\n");
+		put_printk_buffer(buf);
 		dump_stack();
 	}
 
@@ -3349,12 +3382,15 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		return 0;
 #endif
 	if (unlikely(curr->lockdep_depth >= MAX_LOCK_DEPTH)) {
+		struct printk_buffer *buf = get_printk_buffer();
+
 		debug_locks_off();
 		print_lockdep_off("BUG: MAX_LOCK_DEPTH too low!");
-		printk(KERN_DEBUG "depth: %i  max: %lu!\n",
+		bprintk(buf, KERN_DEBUG "depth: %i  max: %lu!\n",
 		       curr->lockdep_depth, MAX_LOCK_DEPTH);
 
-		lockdep_print_held_locks(current);
+		lockdep_print_held_locks(buf, current);
+		put_printk_buffer(buf);
 		debug_show_all_locks();
 		dump_stack();
 
@@ -3371,26 +3407,30 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 print_unlock_imbalance_bug(struct task_struct *curr, struct lockdep_map *lock,
 			   unsigned long ip)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off())
 		return 0;
 	if (debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("=====================================\n");
-	pr_warn("WARNING: bad unlock balance detected!\n");
-	print_kernel_ident();
-	pr_warn("-------------------------------------\n");
-	pr_warn("%s/%d is trying to release lock (",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=====================================\n");
+	bpr_warn(buf, "WARNING: bad unlock balance detected!\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "-------------------------------------\n");
+	bpr_warn(buf, "%s/%d is trying to release lock (",
 		curr->comm, task_pid_nr(curr));
-	print_lockdep_cache(lock);
-	pr_cont(") at:\n");
+	print_lockdep_cache(buf, lock);
+	bpr_cont(buf, ") at:\n");
 	print_ip_sym(ip);
-	pr_warn("but there are no more locks to release!\n");
-	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	bpr_warn(buf, "but there are no more locks to release!\n");
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -3946,26 +3986,30 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 print_lock_contention_bug(struct task_struct *curr, struct lockdep_map *lock,
 			   unsigned long ip)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off())
 		return 0;
 	if (debug_locks_silent)
 		return 0;
 
-	pr_warn("\n");
-	pr_warn("=================================\n");
-	pr_warn("WARNING: bad contention detected!\n");
-	print_kernel_ident();
-	pr_warn("---------------------------------\n");
-	pr_warn("%s/%d is trying to contend lock (",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=================================\n");
+	bpr_warn(buf, "WARNING: bad contention detected!\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "---------------------------------\n");
+	bpr_warn(buf, "%s/%d is trying to contend lock (",
 		curr->comm, task_pid_nr(curr));
-	print_lockdep_cache(lock);
-	pr_cont(") at:\n");
+	print_lockdep_cache(buf, lock);
+	bpr_cont(buf, ") at:\n");
 	print_ip_sym(ip);
-	pr_warn("but there are no locks held!\n");
-	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	bpr_warn(buf, "but there are no locks held!\n");
+	bpr_warn(buf, "\nother info that might help us debug this:\n");
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 
 	return 0;
@@ -4288,22 +4332,26 @@ void __init lockdep_init(void)
 print_freed_lock_bug(struct task_struct *curr, const void *mem_from,
 		     const void *mem_to, struct held_lock *hlock)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off())
 		return;
 	if (debug_locks_silent)
 		return;
 
-	pr_warn("\n");
-	pr_warn("=========================\n");
-	pr_warn("WARNING: held lock freed!\n");
-	print_kernel_ident();
-	pr_warn("-------------------------\n");
-	pr_warn("%s/%d is freeing memory %px-%px, with a lock still held there!\n",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=========================\n");
+	bpr_warn(buf, "WARNING: held lock freed!\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "-------------------------\n");
+	bpr_warn(buf, "%s/%d is freeing memory %px-%px, with a lock still held there!\n",
 		curr->comm, task_pid_nr(curr), mem_from, mem_to-1);
-	print_lock(hlock);
-	lockdep_print_held_locks(curr);
+	print_lock(buf, hlock);
+	lockdep_print_held_locks(buf, curr);
 
-	pr_warn("\nstack backtrace:\n");
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 }
 
@@ -4346,19 +4394,23 @@ void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
 
 static void print_held_locks_bug(void)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off())
 		return;
 	if (debug_locks_silent)
 		return;
 
-	pr_warn("\n");
-	pr_warn("====================================\n");
-	pr_warn("WARNING: %s/%d still has locks held!\n",
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "====================================\n");
+	bpr_warn(buf, "WARNING: %s/%d still has locks held!\n",
 	       current->comm, task_pid_nr(current));
-	print_kernel_ident();
-	pr_warn("------------------------------------\n");
-	lockdep_print_held_locks(current);
-	pr_warn("\nstack backtrace:\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "------------------------------------\n");
+	lockdep_print_held_locks(buf, current);
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 }
 
@@ -4372,26 +4424,29 @@ void debug_check_no_locks_held(void)
 #ifdef __KERNEL__
 void debug_show_all_locks(void)
 {
+	struct printk_buffer *buf;
 	struct task_struct *g, *p;
 
 	if (unlikely(!debug_locks)) {
 		pr_warn("INFO: lockdep is turned off.\n");
 		return;
 	}
-	pr_warn("\nShowing all locks held in the system:\n");
+	buf = get_printk_buffer();
+	bpr_warn(buf, "\nShowing all locks held in the system:\n");
 
 	rcu_read_lock();
 	for_each_process_thread(g, p) {
 		if (!p->lockdep_depth)
 			continue;
-		lockdep_print_held_locks(p);
+		lockdep_print_held_locks(buf, p);
 		touch_nmi_watchdog();
 		touch_all_softlockup_watchdogs();
 	}
 	rcu_read_unlock();
 
-	pr_warn("\n");
-	pr_warn("=============================================\n\n");
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=============================================\n\n");
+	put_printk_buffer(buf);
 }
 EXPORT_SYMBOL_GPL(debug_show_all_locks);
 #endif
@@ -4402,11 +4457,15 @@ void debug_show_all_locks(void)
  */
 void debug_show_held_locks(struct task_struct *task)
 {
+	struct printk_buffer *buf;
+
 	if (unlikely(!debug_locks)) {
 		printk("INFO: lockdep is turned off.\n");
 		return;
 	}
-	lockdep_print_held_locks(task);
+	buf = get_printk_buffer();
+	lockdep_print_held_locks(buf, task);
+	put_printk_buffer(buf);
 }
 EXPORT_SYMBOL_GPL(debug_show_held_locks);
 
@@ -4415,16 +4474,20 @@ asmlinkage __visible void lockdep_sys_exit(void)
 	struct task_struct *curr = current;
 
 	if (unlikely(curr->lockdep_depth)) {
+		struct printk_buffer *buf;
+
 		if (!debug_locks_off())
 			return;
-		pr_warn("\n");
-		pr_warn("================================================\n");
-		pr_warn("WARNING: lock held when returning to user space!\n");
-		print_kernel_ident();
-		pr_warn("------------------------------------------------\n");
-		pr_warn("%s/%d is leaving the kernel with locks still held!\n",
+		buf = get_printk_buffer();
+		bpr_warn(buf, "\n");
+		bpr_warn(buf, "================================================\n");
+		bpr_warn(buf, "WARNING: lock held when returning to user space!\n");
+		print_kernel_ident(buf);
+		bpr_warn(buf, "------------------------------------------------\n");
+		bpr_warn(buf, "%s/%d is leaving the kernel with locks still held!\n",
 				curr->comm, curr->pid);
-		lockdep_print_held_locks(curr);
+		lockdep_print_held_locks(buf, curr);
+		put_printk_buffer(buf);
 	}
 
 	/*
@@ -4436,17 +4499,18 @@ asmlinkage __visible void lockdep_sys_exit(void)
 
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 {
+	struct printk_buffer *buf = get_printk_buffer();
 	struct task_struct *curr = current;
 
 	/* Note: the following can be executed concurrently, so be careful. */
-	pr_warn("\n");
-	pr_warn("=============================\n");
-	pr_warn("WARNING: suspicious RCU usage\n");
-	print_kernel_ident();
-	pr_warn("-----------------------------\n");
-	pr_warn("%s:%d %s!\n", file, line, s);
-	pr_warn("\nother info that might help us debug this:\n\n");
-	pr_warn("\n%srcu_scheduler_active = %d, debug_locks = %d\n",
+	bpr_warn(buf, "\n");
+	bpr_warn(buf, "=============================\n");
+	bpr_warn(buf, "WARNING: suspicious RCU usage\n");
+	print_kernel_ident(buf);
+	bpr_warn(buf, "-----------------------------\n");
+	bpr_warn(buf, "%s:%d %s!\n", file, line, s);
+	bpr_warn(buf, "\nother info that might help us debug this:\n\n");
+	bpr_warn(buf, "\n%srcu_scheduler_active = %d, debug_locks = %d\n",
 	       !rcu_lockdep_current_cpu_online()
 			? "RCU used illegally from offline CPU!\n"
 			: !rcu_is_watching()
@@ -4473,10 +4537,11 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	 * rcu_read_lock_bh() and so on from extended quiescent states.
 	 */
 	if (!rcu_is_watching())
-		pr_warn("RCU used illegally from extended quiescent state!\n");
+		bpr_warn(buf, "RCU used illegally from extended quiescent state!\n");
 
-	lockdep_print_held_locks(curr);
-	pr_warn("\nstack backtrace:\n");
+	lockdep_print_held_locks(buf, curr);
+	bpr_warn(buf, "\nstack backtrace:\n");
+	put_printk_buffer(buf);
 	dump_stack();
 }
 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
Tetsuo Handa Nov. 9, 2018, 9:55 a.m. UTC | #15
On 2018/11/09 15:12, Sergey Senozhatsky wrote:
> On (11/08/18 20:37), Tetsuo Handa wrote:
> > On 2018/11/08 13:45, Sergey Senozhatsky wrote:
> > > So, can we just do the following? /* a sketch */
> > > 
> > > lockdep.c
> > > 	printk_safe_enter_irqsave(flags);
> > > 	lockdep_report();
> > > 	printk_safe_exit_irqrestore(flags);
> > 
> > If buffer size were large enough to hold messages from out_of_memory(),
> > I would like to use it for out_of_memory() because delaying SIGKILL
> > due to waiting for printk() to complete is not good. Surely we can't
> > hold all messages because amount from dump_tasks() is unpredictable.
> > Maybe we can hold all messages from dump_header() except dump_tasks().
> > 
> > But isn't it essentially same with
> > http://lkml.kernel.org/r/1493560477-3016-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
> > which Linus does not want?
> 
> Dunno. I guess we still haven't heard from Linus because he did quite a good
> job setting up his 'email filters' ;)
> 
> Converting the existing users to buffered printk is not so simple.

Yes. We would need to ask for huge modifications just for rare events.

> Apparently there are different paths; some can afford buffered printk, some
> cannot. Some of 'cont' users tend to get advantage of transparent 'cont'
> context: start 'cont' output in function A: A()->pr_cont(), continue it in
> B: A()->B()->pr_cont(), and then in C: A()->B()->C()->pr_cont(), and
> finally flush in A: A()->pr_cont(\n).

Yes, both OOM reporting and lockdep reporting have such a dependency.
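
For example, the shape of the dependency is roughly this (a minimal
illustration, not the actual lockdep or OOM code):

  static void print_detail(void)
  {
          pr_cont(" detail-from-helper");   /* continues the caller's line */
  }

  static void print_report(void)
  {
          printk(KERN_INFO "summary:");
          print_detail();
          pr_cont(" done\n");               /* the line completes here */
  }

Passing a buffer explicitly would mean threading it through print_report(),
print_detail() and every helper in between.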

Linus said

  It might actually be a good idea to help those things, by making
  helper functions available that do the marshalling.

  So not calling "printk()" directly, but having a set of simple
  "buffer_print()" functions where each user has its own buffer, and
  then the "buffer_print()" functions will help people do nicely output
  data.

in https://lore.kernel.org/lkml/CA+55aFyzR4LKhJKLFgvvd9OTsos2_g4-9fova782BX4kyA3bLA@mail.gmail.com/ .

But since scattering "struct printk_buffer *" around as a function argument
is not realistic, scattering "a data structure which Linus suggested (i.e.
no printk() from helper functions)" around as a function argument will not be
realistic either. We would also need to calculate an appropriate buffer size
across many functions, in addition to passing that data structure as
a function argument.
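
For reference, a rough sketch of the marshalling helper Linus described
might look like the following (names are hypothetical, and BUF_SZ is
exactly the size we would have to pre-calculate):

  struct buffer_print {
          unsigned int len;
          char buf[BUF_SZ];
  };

  static void buffer_printf(struct buffer_print *bp, const char *fmt, ...)
  {
          va_list args;

          va_start(args, fmt);
          bp->len += vscnprintf(bp->buf + bp->len,
                                sizeof(bp->buf) - bp->len, fmt, args);
          va_end(args);
  }

  /* only the top-level caller finally does printk("%s", bp->buf); */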

>                                       And then some paths have the
> early_printk requirement. We can break the 'transparent cont' by passing
> buffer pointers around [it can get a bit hairy; looking at lockdep patch],
> but early_printk requirement is a different beast.

How does the early_printk requirement affect the line-buffered printk() API?

I don't think it is impossible to convert from

     printk("Testing feature XYZ..");
     this_may_blow_up_because_of_hw_bugs();
     printk(KERN_CONT " ... ok\n");

to

     printk("Testing feature XYZ:\n");
     this_may_blow_up_because_of_hw_bugs();
     printk("Testing feature XYZ.. ... ok\n");

in https://lore.kernel.org/lkml/CA+55aFwmwdY_mMqdEyFPpRhCKRyeqj=+aCqe5nN108v8ELFvPw@mail.gmail.com/ .

> 
> So in my email I was not advertising printk_safe as a "buffered printk for
> everyone", I was just talking about lockdep. It's a bit doubtful that Peter
> will ACK lockdep transition to buffered printk.

What Linus is suggesting is "helper functions with no printk()", which is
more difficult than my "helper functions with printk()": we would need to
calculate a large enough buffer size (which may traverse many functions)
because we can't printk() from the helper functions when the buffer size
turns out to be too small.

I'm wondering if Linus still insists on scattering "a data structure which
Linus suggested" around. Rather, I'd like to propose the one below. This is
not perfect, but the modification is much smaller.

 kernel/printk/printk.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)

diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 1b2a029..c339594 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1950,6 +1950,168 @@ asmlinkage int printk_emit(int facility, int level,
 }
 EXPORT_SYMBOL(printk_emit);
 
+static void buffered_printk_emit(const char *fmt, ...)
+{
+	va_list args;
+
+	va_start(args, fmt);
+	vprintk_emit(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
+	va_end(args);
+}
+
+static int get_completed_lines(char *buf, int pos)
+{
+	while (pos > 0) {
+		if (buf[--pos] != '\n')
+			continue;
+		buf[pos++] = '\0';
+		return pos;
+	}
+	return 0;
+}
+
+/* A structure for line-buffered printk() output. */
+struct printk_buffer {
+	/*
+	 * State of this line buffer.
+	 *
+	 * 0: Nobody has reserved this buffer, and it is not in use now.
+	 *
+	 * 1 to NR_CPUS: An interrupt context reserved this buffer, but it
+	 *               is not in use now.
+	 * NR_CPUS+1 to ULONG_MAX-1: Some task context reserved this buffer,
+	 *                           but it is not in use now.
+	 *
+	 * ULONG_MAX: Somebody has reserved this buffer, and it is in use now.
+	 *            This value is used as a lock.
+	 */
+	unsigned long context;
+	/* Valid bytes in buf[]. */
+	unsigned short int len;
+	char buf[LOG_LINE_MAX];
+	/* Last accessed jiffies. */
+	unsigned long stamp;
+} __aligned(1024);
+
+/* Number of line buffers. */
+#define NUM_LINE_BUFFERS 16
+
+/*
+ * Line buffered printk() tries to assign a buffer when printk() from a new
+ * context identifier comes in. And it automatically releases that buffer when
+ * one of three conditions listed below became true.
+ *
+ *   (1) printk() from that context identifier emitted '\n' as the last
+ *       character of output.
+ *   (2) printk() from that context identifier tried to print a too long line
+ *       which cannot be stored into a buffer.
+ *   (3) printk() from a new context identifier noticed that some context
+ *       identifier is reserving a buffer for more than 10 seconds without
+ *       emitting '\n'.
+ *
+ * Since (3) is based on a heuristic that somebody forgot to emit '\n' as the
+ * last character of output, pr_cont()/KERN_CONT users are expected to emit
+ * '\n' within 10 seconds even if they reserved a buffer.
+ *
+ * There is a limitation that a buffer cannot be released if an oops occurred
+ * while that buffer is locked (context == ULONG_MAX).
+ *
+ * The granularity of context is a bit sparse. If printk() was called from task
+ * context, the address of current thread is used as context identifier. If
+ * printk() was called from non-task context, current processor ID is used as
+ * context identifier.
+ */
+static int try_buffered_printk(const char *fmt, va_list args)
+{
+	static struct printk_buffer printk_buffers[NUM_LINE_BUFFERS];
+	va_list tmp_args;
+	int pos;
+	int r;
+	const unsigned long context = in_task() ?
+		(unsigned long) current : raw_smp_processor_id() + 1;
+	struct printk_buffer *ptr;
+	const unsigned long now = jiffies;
+
+	/* Check if this context can lock a reserved buffer. */
+	for (pos = 0; pos < NUM_LINE_BUFFERS; pos++) {
+		ptr = &printk_buffers[pos];
+		if (context == ptr->context && context ==
+		    cmpxchg_acquire(&ptr->context, context, ULONG_MAX))
+			goto found;
+	}
+	/* Check if somebody is reserving a buffer for too long. */
+	for (pos = 0; pos < NUM_LINE_BUFFERS; pos++) {
+		unsigned long tmp;
+
+		ptr = &printk_buffers[pos];
+		if (likely(!ptr->len ||
+			   !time_after(now, ptr->stamp + 10 * HZ)))
+			continue;
+		tmp = ptr->context;
+		if (tmp == 0 || tmp == ULONG_MAX ||
+		    cmpxchg_acquire(&ptr->context, tmp, ULONG_MAX) != tmp)
+			continue;
+		if (ptr->len && time_after(now, ptr->stamp + 10 * HZ)) {
+			ptr->buf[ptr->len] = '\0';
+			buffered_printk_emit("%s\n", ptr->buf);
+			ptr->len = 0;
+		}
+		xchg_release(&ptr->context, 0);
+	}
+	/* Check if this context can reserve and lock a buffer. */
+	for (pos = 0; pos < NUM_LINE_BUFFERS; pos++) {
+		ptr = &printk_buffers[pos];
+		if (ptr->context == 0 &&
+		    cmpxchg_acquire(&ptr->context, 0, ULONG_MAX) == 0)
+			goto found;
+	}
+	/* All buffers are reserved. Fallback to normal printk(). */
+	return -1;
+ found:
+	ptr->stamp = now;
+	while (true) {
+		pos = ptr->len && (printk_get_level(fmt) == 'c') ? 2 : 0;
+		va_copy(tmp_args, args);
+		r = vsnprintf(ptr->buf + ptr->len, sizeof(ptr->buf) - ptr->len,
+			      fmt + pos, tmp_args);
+		va_end(tmp_args);
+		if (likely(r + ptr->len < sizeof(ptr->buf)))
+			break;
+		/* Oops. Line was too long to store into this buffer. */
+		if (!ptr->len) {
+			/*
+			 * We can do nothing. Fallback to normal printk(). But
+			 * be prepared for this printk() being the last call
+			 * from this context.
+			 */
+			r = -1;
+			goto out;
+		}
+		/* Retry after flushing incomplete line in this buffer. */
+		ptr->buf[ptr->len] = '\0';
+		buffered_printk_emit("%s", ptr->buf);
+		ptr->len = 0;
+	}
+	ptr->len += r;
+	/* Flush already completed lines. */
+	pos = get_completed_lines(ptr->buf, ptr->len);
+	if (pos) {
+		buffered_printk_emit("%s\n", ptr->buf);
+		ptr->len -= pos;
+		memmove(ptr->buf, ptr->buf + pos, ptr->len);
+	}
+ out:
+	/*
+	 * Release this buffer if this buffer became empty because this
+	 * printk() might be the last call from this context. Otherwise, keep
+	 * this buffer for future printk() calls from this context.
+	 */
+	if (!ptr->len)
+		xchg_release(&ptr->context, 0);
+	else
+		xchg_release(&ptr->context, context);
+	return r;
+}
+
 int vprintk_default(const char *fmt, va_list args)
 {
 	int r;
@@ -1961,6 +2123,10 @@ int vprintk_default(const char *fmt, va_list args)
 		return r;
 	}
 #endif
+	r = try_buffered_printk(fmt, args);
+	if (r >= 0)
+		return r;
+
 	r = vprintk_emit(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
 
 	return r;
Linus Torvalds Nov. 9, 2018, 2:08 p.m. UTC | #16
On Fri, Nov 9, 2018 at 12:12 AM Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
>
> Dunno. I guess we still haven't heard from Linus because he did quite a good
> job setting up his 'email filters' ;)

Not filters, just long threads that I lurk on.

I don't actually care too much about this - the part I care about is
that when panics etc happen, things go out with a true best effort.

And "best effort" actually means "reality", not "theory". I don't care
one whit for some broken odd serial console that has a lock and
deadlocks if you get a panic just in the right place. I care about the
main printk/tty code doing the right thing, and avoiding the locks
with the scheduler and timers etc. So the timestamping and wakeup code
needing locks - or thinking you can delay things and print them out
later (when no later happens because you're panicking in an NMI) -
*that* is what I care deeply about.

Something like having a line buffering interface for random debugging
messages etc, I just don't get excited about. It just needs to be
simple enough and robust enough. You guys seem to be talking it out
ok.

             Linus
Steven Rostedt Nov. 9, 2018, 2:42 p.m. UTC | #17
On Fri, 9 Nov 2018 08:08:13 -0600
Linus Torvalds <torvalds@linux-foundation.org> wrote:

>  You guys seem to be talking it out
> ok.

Do your new filters not only remove words, but also add text?

  ;-)

-- Steve
Petr Mladek Nov. 9, 2018, 3:43 p.m. UTC | #18
On Fri 2018-11-09 18:55:26, Tetsuo Handa wrote:
> On 2018/11/09 15:12, Sergey Senozhatsky wrote:
> > On (11/08/18 20:37), Tetsuo Handa wrote:
> > > On 2018/11/08 13:45, Sergey Senozhatsky wrote:
> How does the early_printk requirement affect the line-buffered printk() API?
> 
> I don't think it is impossible to convert from
> 
>      printk("Testing feature XYZ..");
>      this_may_blow_up_because_of_hw_bugs();
>      printk(KERN_CONT " ... ok\n");
> 
> to
> 
>      printk("Testing feature XYZ:\n");
>      this_may_blow_up_because_of_hw_bugs();
>      printk("Testing feature XYZ.. ... ok\n");
> 
> in https://lore.kernel.org/lkml/CA+55aFwmwdY_mMqdEyFPpRhCKRyeqj=+aCqe5nN108v8ELFvPw@mail.gmail.com/ .

I just wonder how common this pattern is. I have tried but failed
to find any instance.

This problem looks like a big argument against explicit buffers.
But I wonder if it is real.

> > 
> > So in my email I was not advertising printk_safe as a "buffered printk for
> > everyone", I was just talking about lockdep. It's a bit doubtful that Peter
> > will ACK lockdep transition to buffered printk.
> 
> What Linus is suggesting is "helper functions with no printk()", which is
> more difficult than my "helper functions with printk()" because we need to
> calculate enough buffer size (which may traverse many functions) because
> we can't printk() from helper functions when buffer size is too small.
> 
> I'm wondering if Linus still insists scattering "a data structure which
> Linus suggested" around. Rather, I'd like to propose below one. This is not
> perfect but modification is much smaller.
> 
> + * Line buffered printk() tries to assign a buffer when printk() from a new
> + * context identifier comes in. And it automatically releases that buffer when
> + * one of three conditions listed below became true.
> + *
> + *   (1) printk() from that context identifier emitted '\n' as the last
> + *       character of output.
> + *   (2) printk() from that context identifier tried to print a too long line
> + *       which cannot be stored into a buffer.
> + *   (3) printk() from a new context identifier noticed that some context
> + *       identifier is reserving a buffer for more than 10 seconds without
> + *       emitting '\n'.
> + *
> + * Since (3) is based on a heuristic that somebody forgot to emit '\n' as the
> + * last character of output, pr_cont()/KERN_CONT users are expected to emit
> + * '\n' within 10 seconds even if they reserved a buffer.

This is my main concern about this approach. It is so easy to omit
the final '\n'.

They are currently delayed until another printk(). Even this is bad.
Unfortunately we could not set up a timer from printk() because it
would add more locks into the game.

This patch will make it worse. They might get delayed by 10s or even
more. Many other lines might appear in between. Also the code is
more tricky[*].


Sigh, I am really unhappy with how the buffered_printk() conversion
looks in lockdep.c. But I still think that the approach is more
reliable. I am going to investigate many more pr_cont() users.
I wonder how many would be that complicated. I do not want
to give up just because of one use-case that was complicated
even before.


[*] The buffer can get written and flushed by different processes.
    It is not trivial to do it correctly in a lockless way.

    The proposed locking looks right at first glance. But
    the code is a bit scary ;-)
Tetsuo Handa Nov. 10, 2018, 2:42 a.m. UTC | #19
On 2018/11/10 0:43, Petr Mladek wrote:
>> + * Line buffered printk() tries to assign a buffer when printk() from a new
>> + * context identifier comes in. And it automatically releases that buffer when
>> + * one of three conditions listed below became true.
>> + *
>> + *   (1) printk() from that context identifier emitted '\n' as the last
>> + *       character of output.
>> + *   (2) printk() from that context identifier tried to print a too long line
>> + *       which cannot be stored into a buffer.
>> + *   (3) printk() from a new context identifier noticed that some context
>> + *       identifier is reserving a buffer for more than 10 seconds without
>> + *       emitting '\n'.
>> + *
>> + * Since (3) is based on a heuristic that somebody forgot to emit '\n' as the
> >> + * last character of output, pr_cont()/KERN_CONT users are expected to emit
>> + * '\n' within 10 seconds even if they reserved a buffer.
> 
> This is my main concern about this approach. It is so easy to omit
> the final '\n'.

If it is so easy to forget the final '\n', there will be a lot of implicit
pr_cont() users (because pr_cont() assumes that the previous printk() omitted
the final '\n'), and "I am going to investigate many more pr_cont() users."
will be insufficient for reaching a meaningful conclusion.

Checking "lack of the the final '\n'" means that we need to check
"all printk() users who are not emitting the final '\n'" and evaluate
"whether there is a possibility that subsequent printk() will not be
 called from that context due to e.g. conditional branches". That is an
impossible task for anybody, for there might be out-of-tree code doing it.

> 
> They are currently delayed until another printk(). Even this is bad.
> Unfortunately we could not set up a timer from printk() because it
> would add more locks into the game.

We could use an interval timer for flushing incomplete lines.
But updating printk() users to always end with '\n' will be preferable.
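
A minimal sketch of such a timer (untested; try_flush_stale_buffers() is
a hypothetical helper that would scan printk_buffers[] and flush entries
which stayed incomplete for more than 10 seconds):

  static void flush_stale_lines(struct timer_list *unused);
  static DEFINE_TIMER(printk_flush_timer, flush_stale_lines);

  static void flush_stale_lines(struct timer_list *unused)
  {
          try_flush_stale_buffers();      /* hypothetical helper */
          /* re-arm; armed once at boot via mod_timer() */
          mod_timer(&printk_flush_timer, jiffies + 10 * HZ);
  }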

> 
> This patch will make it worse. They might get delayed by 10s or even
> more. Many other lines might appear in between. Also the code is
> more tricky[*].

If there are a lot of implicit pr_cont() users (who are omitting
the final '\n' in the previous printk()), we can try to find them (and
fix them) by "reporting the location of printk() which omitted the
final '\n'" instead of "flushing the partial line from different
processes" when "try_buffered_printk() (or an interval timer) found
that some buffer is holding a partial line for more than 10 seconds".

> 
> 
> Sigh, I am really unhappy with how the buffered_printk() conversion
> looks in lockdep.c. But I still think that the approach is more
> reliable. I am going to investigate many more pr_cont() users.

If there is a possibility that a subsequent printk() will not be called from
that context due to e.g. conditional branches, we will need to flush after
the previous printk() in order to eliminate the possibility of failing to
flush.

That is, we are asking printk() users to fix the previous printk() so that
a partial line is always flushed (with '\n' or without '\n').

> I wonder how many would be that complicated. I do not want
> to give up just because of one use-case that was complicated
> even before.
> 
> 
> [*] The buffer can get written and flushed by different processes.
>     It is not trivial to do it correctly in a lockless way.
> 
>     The proposed locking looks right at first glance. But
>     the code is a bit scary ;-)

try_buffered_printk() will be a good hook for finding implicit
pr_cont() users who omit the final '\n', for we can find them
without making changes to printk() users.

The try_buffered_printk() hook can also offload the atomic-but-complicated
task (converting e.g. %pSOMETHING to %s) to outside of logbuf_lock, and
reduce the period for which logbuf_lock is held.

Since try_buffered_printk() serves as a hook, if we ignore the offloading
of that atomic-but-complicated task, we can eliminate try_buffered_printk()
once we have updated printk() users to always end with '\n'.
Tetsuo Handa Nov. 10, 2018, 8:52 a.m. UTC | #20
On 2018/11/10 0:43, Petr Mladek wrote:
> On Fri 2018-11-09 18:55:26, Tetsuo Handa wrote:
>> How does the early_printk requirement affect the line-buffered printk() API?
>>
>> I don't think it is impossible to convert from
>>
>>      printk("Testing feature XYZ..");
>>      this_may_blow_up_because_of_hw_bugs();
>>      printk(KERN_CONT " ... ok\n");
>>
>> to
>>
>>      printk("Testing feature XYZ:\n");
>>      this_may_blow_up_because_of_hw_bugs();
>>      printk("Testing feature XYZ.. ... ok\n");
>>
>> in https://lore.kernel.org/lkml/CA+55aFwmwdY_mMqdEyFPpRhCKRyeqj=+aCqe5nN108v8ELFvPw@mail.gmail.com/ .
> 
> I just wonder how common this pattern is. I have tried but failed
> to find any instance.
> 
> This problem looks like a big argument against explicit buffers.
> But I wonder if it is real.

An example of boot-up messages where buffering makes a difference.

Vanilla:

[    0.260459] smp: Bringing up secondary CPUs ...
[    0.269595] x86: Booting SMP configuration:
[    0.270461] .... node  #0, CPUs:      #1
[    0.066578] Disabled fast string operations
[    0.066578] mce: CPU supports 0 MCE banks
[    0.066578] smpboot: CPU 1 Converting physical 2 to logical package 1
[    0.342569]  #2
[    0.066578] Disabled fast string operations
[    0.066578] mce: CPU supports 0 MCE banks
[    0.066578] smpboot: CPU 2 Converting physical 4 to logical package 2
[    0.413442]  #3
[    0.066578] Disabled fast string operations
[    0.066578] mce: CPU supports 0 MCE banks
[    0.066578] smpboot: CPU 3 Converting physical 6 to logical package 3
[    0.476562] smp: Brought up 1 node, 4 CPUs
[    0.477477] smpboot: Max logical packages: 8
[    0.477514] smpboot: Total of 4 processors activated (22691.70 BogoMIPS)

With try_buffered_printk() patch:

[    0.279768] smp: Bringing up secondary CPUs ...
[    0.288825] x86: Booting SMP configuration:
[    0.066748] Disabled fast string operations
[    0.066748] mce: CPU supports 0 MCE banks
[    0.066748] smpboot: CPU 1 Converting physical 2 to logical package 1
[    0.066748] Disabled fast string operations
[    0.066748] mce: CPU supports 0 MCE banks
[    0.066748] smpboot: CPU 2 Converting physical 4 to logical package 2
[    0.066748] Disabled fast string operations
[    0.066748] mce: CPU supports 0 MCE banks
[    0.066748] smpboot: CPU 3 Converting physical 6 to logical package 3
[    0.495862] .... node  #0, CPUs:      #1 #2 #36smp: Brought up 1 node, 4 CPUs
[    0.496833] smpboot: Max logical packages: 8
[    0.497609] smpboot: Total of 4 processors activated (22665.22 BogoMIPS)



Hmm, arch/x86/kernel/smpboot.c is not emitting '\n' after #num

        if (system_state < SYSTEM_RUNNING) {
                if (node != current_node) {
                        if (current_node > (-1))
                                pr_cont("\n");
                        current_node = node;

                        printk(KERN_INFO ".... node %*s#%d, CPUs:  ",
                               node_width - num_digits(node), " ", node);
                }

                /* Add padding for the BSP */
                if (cpu == 1)
                        pr_cont("%*s", width + 1, " ");

                pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);

        } else
                pr_info("Booting Node %d Processor %d APIC 0x%x\n",
                        node, cpu, apicid);

and causing

        pr_info("Brought up %d node%s, %d CPU%s\n",
                num_nodes, (num_nodes > 1 ? "s" : ""),
                num_cpus,  (num_cpus  > 1 ? "s" : ""));

line to be concatenated to the previous line.
Maybe disable try_buffered_printk() if system_state != SYSTEM_RUNNING?
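
Something like the following at the top of try_buffered_printk() would do
(a sketch; returning -1 means "fall back to plain vprintk_emit()"):

  /* Bypass line buffering until boot has finished. */
  if (system_state != SYSTEM_RUNNING)
          return -1;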
Petr Mladek Nov. 23, 2018, 12:46 p.m. UTC | #21
On Sat 2018-11-10 11:42:03, Tetsuo Handa wrote:
> On 2018/11/10 0:43, Petr Mladek wrote:
> >> + * Line buffered printk() tries to assign a buffer when printk() from a new
> >> + * context identifier comes in. And it automatically releases that buffer when
> >> + * one of three conditions listed below became true.
> >> + *
> >> + *   (1) printk() from that context identifier emitted '\n' as the last
> >> + *       character of output.
> >> + *   (2) printk() from that context identifier tried to print a too long line
> >> + *       which cannot be stored into a buffer.
> >> + *   (3) printk() from a new context identifier noticed that some context
> >> + *       identifier is reserving a buffer for more than 10 seconds without
> >> + *       emitting '\n'.
> >> + *
> >> + * Since (3) is based on a heuristic that somebody forgot to emit '\n' as the
> >> + * last character of output, pr_cont()/KERN_CONT users are expected to emit
> >> + * '\n' within 10 seconds even if they reserved a buffer.
> > 
> > This is my main concern about this approach. It is so easy to omit
> > the final '\n'.
> 
> If it is so easy to forget the final '\n', there will be a lot of implicit
> pr_cont() users (because pr_cont() assumes that the previous printk() omitted
> the final '\n'), and "I am going to investigate many more pr_cont() users."
> will be insufficient for reaching a meaningful conclusion.
> 
> Checking "lack of the the final '\n'" means that we need to check
> "all printk() users who are not emitting the final '\n'" and evaluate
> "whether there is a possibility that subsequent printk() will not be
>  called from that context due to e.g. conditional branches". That is an
> impossible task for anybody, for there might be out-of-tree code doing it.
> > 
> > They are currently delayed until another printk(). Even this is bad.
> > Unfortunately we could not set up a timer from printk() because it
> > would add more locks into the game.
> 
> We could use an interval timer for flushing incomplete lines.

I am more and more wondering if the buffered printk is worth
the effort. The more buffers we use the more we risk that nobody
would see some important message. Even a part of the line
might be crucial in some situations.

Steven told me at the Plumbers conference that even a few initial
characters saved him a day a few times.


> But updating printk() users to always end with '\n' will be preferable.

This sounds like a whack-a-mole game. If I get it correctly, you wrote
that it is "an impossible task for anybody" just a few lines above.

Best Regards,
Petr
Petr Mladek Nov. 23, 2018, 12:52 p.m. UTC | #22
On Sat 2018-11-10 17:52:17, Tetsuo Handa wrote:
> On 2018/11/10 0:43, Petr Mladek wrote:
> > On Fri 2018-11-09 18:55:26, Tetsuo Handa wrote:
>> How does the early_printk requirement affect the line-buffered printk() API?
> >>
> >> I don't think it is impossible to convert from
> >>
> >>      printk("Testing feature XYZ..");
> >>      this_may_blow_up_because_of_hw_bugs();
> >>      printk(KERN_CONT " ... ok\n");
> >>
> >> to
> >>
> >>      printk("Testing feature XYZ:\n");
> >>      this_may_blow_up_because_of_hw_bugs();
> >>      printk("Testing feature XYZ.. ... ok\n");
> >>
> >> in https://lore.kernel.org/lkml/CA+55aFwmwdY_mMqdEyFPpRhCKRyeqj=+aCqe5nN108v8ELFvPw@mail.gmail.com/ .
> > 
> > I just wonder how common this pattern is. I have tried but failed
> > to find any instance.
> > 
> > This problem looks like a big argument against explicit buffers.
> > But I wonder if it is real.
> 
> An example of boot-up messages where buffering makes a difference.
> 
> Vanilla:
> 
> [    0.260459] smp: Bringing up secondary CPUs ...
> [    0.269595] x86: Booting SMP configuration:
> [    0.270461] .... node  #0, CPUs:      #1
> [    0.066578] Disabled fast string operations
> [    0.066578] mce: CPU supports 0 MCE banks
> [    0.066578] smpboot: CPU 1 Converting physical 2 to logical package 1
> [    0.342569]  #2
> [    0.066578] Disabled fast string operations
> [    0.066578] mce: CPU supports 0 MCE banks
> [    0.066578] smpboot: CPU 2 Converting physical 4 to logical package 2
> [    0.413442]  #3
> [    0.066578] Disabled fast string operations
> [    0.066578] mce: CPU supports 0 MCE banks
> [    0.066578] smpboot: CPU 3 Converting physical 6 to logical package 3
> [    0.476562] smp: Brought up 1 node, 4 CPUs
> [    0.477477] smpboot: Max logical packages: 8
> [    0.477514] smpboot: Total of 4 processors activated (22691.70 BogoMIPS)
> 
> With try_buffered_printk() patch:
> 
> [    0.279768] smp: Bringing up secondary CPUs ...
> [    0.288825] x86: Booting SMP configuration:
> [    0.066748] Disabled fast string operations
> [    0.066748] mce: CPU supports 0 MCE banks
> [    0.066748] smpboot: CPU 1 Converting physical 2 to logical package 1
> [    0.066748] Disabled fast string operations
> [    0.066748] mce: CPU supports 0 MCE banks
> [    0.066748] smpboot: CPU 2 Converting physical 4 to logical package 2
> [    0.066748] Disabled fast string operations
> [    0.066748] mce: CPU supports 0 MCE banks
> [    0.066748] smpboot: CPU 3 Converting physical 6 to logical package 3
> [    0.495862] .... node  #0, CPUs:      #1 #2 #36smp: Brought up 1 node, 4 CPUs
> [    0.496833] smpboot: Max logical packages: 8
> [    0.497609] smpboot: Total of 4 processors activated (22665.22 BogoMIPS)
> 
> 
> 
> Hmm, arch/x86/kernel/smpboot.c is not emitting '\n' after #num
> 
>         if (system_state < SYSTEM_RUNNING) {
>                 if (node != current_node) {
>                         if (current_node > (-1))
>                                 pr_cont("\n");
>                         current_node = node;
> 
>                         printk(KERN_INFO ".... node %*s#%d, CPUs:  ",
>                                node_width - num_digits(node), " ", node);
>                 }
> 
>                 /* Add padding for the BSP */
>                 if (cpu == 1)
>                         pr_cont("%*s", width + 1, " ");
> 
>                 pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
> 
>         } else
>                 pr_info("Booting Node %d Processor %d APIC 0x%x\n",
>                         node, cpu, apicid);
> 
> and causing
> 
>         pr_info("Brought up %d node%s, %d CPU%s\n",
>                 num_nodes, (num_nodes > 1 ? "s" : ""),
>                 num_cpus,  (num_cpus  > 1 ? "s" : ""));
> 
> line to be concatenated to the previous line.
> Maybe disable try_buffered_printk() if system_state !=
> SYSTEM_RUNNING?

We need to handle continuation lines during boot as well. Similar
problems might also exist in code that is called in the SYSTEM_RUNNING
state.

This is yet another clue that the try_buffered_printk() approach is not
that good an idea.

Best Regards,
Petr
Tetsuo Handa Nov. 23, 2018, 1:12 p.m. UTC | #23
On 2018/11/23 21:46, Petr Mladek wrote:
> I am more and more wondering if the buffered printk is worth
> the effort. The more buffers we use the more we risk that nobody
> would see some important message. Even a part of the line
> might be crucial in some situations.
> 
>> Steven told me at the Plumbers conference that even a few initial
>> characters saved him a day a few times.

Yes, a few initial characters of one line might sometimes help.
But emitting one line at a time also helps sometimes, especially
when we need to interpret multiple lines and group those lines from
concurrent printk() callers.

> 
> 
>> But updating printk() users to always end with '\n' will be preferable.
> 
> This sounds like a whack-a-mole game. If I get it correctly, you wrote
> that it is "an impossible task for anybody" just a few lines above.

Yes, updating printk() users is almost impossible. I think that something
like the diff shown below is the best we can afford. If you believe that
line buffering is wrong, I can tolerate switching via a kernel config
option like CONFIG_DEBUG_AID_FOR_SYZBOT=y.

 kernel/printk/printk.c | 263 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 176 insertions(+), 87 deletions(-)

diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 315718b..2276c99 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -584,6 +584,15 @@ static int log_store(int facility, int level,
 	struct printk_log *msg;
 	u32 size, pad_len;
 	u16 trunc_msg_len = 0;
+	static char id[32];
+	u16 id_len;
+
+	if (in_task())
+		id_len = scnprintf(id, sizeof(id), "[T%u] ", current->pid);
+	else
+		id_len = scnprintf(id, sizeof(id), "[C%u] ",
+				   raw_smp_processor_id());
+	text_len += id_len;
 
 	/* number of '\0' padding bytes to next message */
 	size = msg_used_size(text_len, dict_len, &pad_len);
@@ -607,12 +616,16 @@ static int log_store(int facility, int level,
 		log_next_idx = 0;
 	}
 
+	text_len -= id_len;
+
 	/* fill message */
 	msg = (struct printk_log *)(log_buf + log_next_idx);
-	memcpy(log_text(msg), text, text_len);
-	msg->text_len = text_len;
+	memcpy(log_text(msg), id, id_len);
+	memcpy(log_text(msg) + id_len, text, text_len);
+	msg->text_len = text_len + id_len;
 	if (trunc_msg_len) {
-		memcpy(log_text(msg) + text_len, trunc_msg, trunc_msg_len);
+		memcpy(log_text(msg) + id_len + text_len, trunc_msg,
+		       trunc_msg_len);
 		msg->text_len += trunc_msg_len;
 	}
 	memcpy(log_dict(msg), dict, dict_len);
@@ -620,10 +633,7 @@ static int log_store(int facility, int level,
 	msg->facility = facility;
 	msg->level = level & 7;
 	msg->flags = flags & 0x1f;
-	if (ts_nsec > 0)
-		msg->ts_nsec = ts_nsec;
-	else
-		msg->ts_nsec = local_clock();
+	msg->ts_nsec = ts_nsec;
 	memset(log_dict(msg) + dict_len, 0, pad_len);
 	msg->len = size;
 
@@ -1758,87 +1768,154 @@ static inline void printk_delay(void)
 }
 
 /*
- * Continuation lines are buffered, and not committed to the record buffer
- * until the line is complete, or a race forces it. The line fragments
- * though, are printed immediately to the consoles to ensure everything has
- * reached the console in case of a kernel crash.
+ * Continuation lines are buffered based on type of context, and not committed
+ * to the record buffer until the line is complete, or a race forces it.
+ *
+ * While this context calculation is not perfect, asking printk() callers to
+ * explicitly pass "struct printk_buffer" (except "context" field) as their
+ * function arguments will require too large tree-wide changes. Therefore,
+ * let's tolerate failing to separate "up to one line of messages", and avoid
+ * bloating kernel code size for addressing infrequently happening races.
  */
-static struct cont {
-	char buf[LOG_LINE_MAX];
-	size_t len;			/* length == 0 means unused buffer */
-	struct task_struct *owner;	/* task of first print*/
-	u64 ts_nsec;			/* time of first print */
-	u8 level;			/* log level of first message */
-	u8 facility;			/* log facility of first message */
-	enum log_flags flags;		/* prefix, newline flags */
-} cont;
-
-static void cont_flush(void)
+static inline unsigned long printk_context(void)
 {
-	if (cont.len == 0)
-		return;
-
-	log_store(cont.facility, cont.level, cont.flags, cont.ts_nsec,
-		  NULL, 0, cont.buf, cont.len);
-	cont.len = 0;
-}
+	unsigned long base;
 
-static bool cont_add(int facility, int level, enum log_flags flags, const char *text, size_t len)
-{
-	/* If the line gets too long, split it up in separate records. */
-	if (cont.len + len > sizeof(cont.buf)) {
-		cont_flush();
-		return false;
-	}
-
-	if (!cont.len) {
-		cont.facility = facility;
-		cont.level = level;
-		cont.owner = current;
-		cont.ts_nsec = local_clock();
-		cont.flags = flags;
-	}
-
-	memcpy(cont.buf + cont.len, text, len);
-	cont.len += len;
-
-	// The original flags come from the first line,
-	// but later continuations can add a newline.
-	if (flags & LOG_NEWLINE) {
-		cont.flags |= LOG_NEWLINE;
-		cont_flush();
-	}
-
-	return true;
+	if (in_task())
+		return (unsigned long) current;
+	if (in_nmi())
+		base = NR_CPUS * 2;
+	else if (in_irq())
+		base = NR_CPUS;
+	else
+		base = 0;
+	return base + raw_smp_processor_id();
 }
 
-static size_t log_output(int facility, int level, enum log_flags lflags, const char *dict, size_t dictlen, char *text, size_t text_len)
-{
+struct printk_buffer {
+	char buf[LOG_LINE_MAX];
+	/* valid bytes in buf[] */
+	size_t len;
 	/*
-	 * If an earlier line was buffered, and we're a continuation
-	 * write from the same process, try to add it to the buffer.
+	 * owner context of this buffer if len != 0
+	 *
+	 * 0 to NR_CPUS - 1: A soft IRQ context reserved.
+	 * NR_CPUS to NR_CPUS * 2 - 1: A hard IRQ context reserved.
+	 * NR_CPUS * 2 to NR_CPUS * 3 - 1: An NMI context reserved.
+	 * NR_CPUS * 3 to ULONG_MAX: Some task context reserved.
 	 */
-	if (cont.len) {
-		if (cont.owner == current && (lflags & LOG_CONT)) {
-			if (cont_add(facility, level, lflags, text, text_len))
-				return text_len;
+	unsigned long context;
+	/* last accessed jiffies */
+	unsigned long last_used;
+	/* time of first print */
+	u64 ts_nsec;
+	/* log flags of first message */
+	enum log_flags lflags;
+	/* log level of first message */
+	u8 level;
+} __aligned(1024);
+
+/* Number of line buffers. */
+#define NUM_LINE_BUFFERS 16
+
+/*
+ * Line buffered printk() tries to assign a buffer when printk() from a new
+ * context identifier comes in. And it automatically releases that buffer when
+ * one of three conditions listed below became true.
+ *
+ *   (1) printk() from that context identifier emitted '\n' as the last
+ *       character of output.
+ *   (2) printk() from that context identifier tried to print a too long line
+ *       which cannot be stored into a buffer.
+ *   (3) printk() from a new context identifier noticed that some context
+ *       identifier is reserving a buffer for more than 10 seconds without
+ *       emitting '\n'.
+ *
+ * Since (3) is based on a heuristic that somebody forgot to emit '\n' as the
+ * last character of output, pr_cont()/KERN_CONT users are expected to emit
+ * '\n' within 10 seconds even if they reserved a buffer.
+ */
+static void log_output(const int facility, const int level,
+		       const enum log_flags lflags,
+		       const char *text, const size_t text_len)
+{
+	static struct printk_buffer printk_buffers[NUM_LINE_BUFFERS];
+	static struct printk_buffer static_buffer;
+	struct printk_buffer *ptr;
+	int i;
+	const unsigned long context = printk_context();
+	const unsigned long now = jiffies;
+
+	/* Only kernel-generated messages are subjected to buffering. */
+	if (facility) {
+		ptr = &static_buffer;
+		goto found;
+	}
+	/* Check if this context already reserved a buffer. */
+	for (i = 0; i < NUM_LINE_BUFFERS; i++) {
+		ptr = &printk_buffers[i];
+		if (ptr->len && context == ptr->context)
+			goto found;
+	}
+	/* Check if somebody is reserving a buffer for too long. */
+	for (i = 0; i < NUM_LINE_BUFFERS; i++) {
+		ptr = &printk_buffers[i];
+		if (!ptr->len || !time_after(now, ptr->last_used + 10 * HZ))
+			continue;
+		/* Forced flush due to timeout. */
+		log_store(facility, ptr->level, ptr->lflags, ptr->ts_nsec,
+			  NULL, 0, ptr->buf, ptr->len);
+		ptr->len = 0;
+	}
+	/* Check if this context can reserve a buffer. */
+	for (i = 0; i < NUM_LINE_BUFFERS; i++) {
+		ptr = &printk_buffers[i];
+		if (ptr->len == 0) {
+			ptr->context = context;
+			goto found;
 		}
-		/* Otherwise, make sure it's flushed */
-		cont_flush();
 	}
+	/* Forced assign due to out of buffers. */
+	ptr = &static_buffer;
+ found:
+	/* Forced flush due to log prefix or out of space. */
+	if (((ptr->len && (lflags & LOG_PREFIX)) ||
+	     text_len + ptr->len > sizeof(ptr->buf))) {
+		log_store(facility, ptr->level, ptr->lflags, ptr->ts_nsec,
+			  NULL, 0, ptr->buf, ptr->len);
+		ptr->len = 0;
+	}
+	if (ptr->len == 0) {
+		ptr->lflags = lflags;
+		ptr->level = level;
+		ptr->ts_nsec = local_clock();
+	}
+	/* 0 <= text_len <= LOG_LINE_MAX due to vscnprintf(). */
+	memmove(ptr->buf + ptr->len, text, text_len);
+	ptr->len += text_len;
+	/*
+	 * Flush already completed lines. By splitting at '\n', we can inject
+	 * caller id information to each line of text information.
+	 */
+	while (true) {
+		size_t len;
+		char *cp = memchr(ptr->buf, '\n', ptr->len);
 
-	/* Skip empty continuation lines that couldn't be added - they just flush */
-	if (!text_len && (lflags & LOG_CONT))
-		return 0;
-
-	/* If it doesn't end in a newline, try to buffer the current line */
-	if (!(lflags & LOG_NEWLINE)) {
-		if (cont_add(facility, level, lflags, text, text_len))
-			return text_len;
+		if (!cp)
+			break;
+		len = ++cp - ptr->buf;
+		log_store(facility, ptr->level, ptr->lflags, ptr->ts_nsec,
+			  NULL, 0, ptr->buf, len - 1);
+		ptr->len -= len;
+		memmove(ptr->buf, cp, ptr->len);
 	}
-
-	/* Store it in the record log */
-	return log_store(facility, level, lflags, 0, dict, dictlen, text, text_len);
+	/* Forced flush due to out of buffers. */
+	if (ptr == &static_buffer && ptr->len) {
+		log_store(facility, ptr->level, ptr->lflags, ptr->ts_nsec,
+			  NULL, 0, ptr->buf, ptr->len);
+		ptr->len = 0;
+	}
+	ptr->last_used = now;
 }
 
 /* Must be called under logbuf_lock. */
@@ -1857,12 +1934,6 @@ int vprintk_store(int facility, int level,
 	 */
 	text_len = vscnprintf(text, sizeof(textbuf), fmt, args);
 
-	/* mark and strip a trailing newline */
-	if (text_len && text[text_len-1] == '\n') {
-		text_len--;
-		lflags |= LOG_NEWLINE;
-	}
-
 	/* strip kernel syslog prefix and extract log level or control flags */
 	if (facility == 0) {
 		int kern_level;
@@ -1888,11 +1959,29 @@ int vprintk_store(int facility, int level,
 	if (level == LOGLEVEL_DEFAULT)
 		level = default_message_loglevel;
 
-	if (dict)
-		lflags |= LOG_PREFIX|LOG_NEWLINE;
+	if (dict) {
+		char *cp;
 
-	return log_output(facility, level, lflags,
-			  dict, dictlen, text, text_len);
+		/* Remove the trailing newline. */
+		if (text_len && text[text_len-1] == '\n')
+			text_len--;
+		/*
+		 * Remove any newline because we don't want to duplicate dict
+		 * information while we want to prefix caller id information to
+		 * each line of text information.
+		 */
+		while ((cp = memchr(text, '\n', text_len)) != NULL)
+			*cp = ' ';
+		log_store(facility, level, lflags | LOG_PREFIX | LOG_NEWLINE,
+			  local_clock(), dict, dictlen, text, text_len);
+		return text_len;
+	}
+
+	/* Add '\n' via line buffering for kernel-generated messages. */
+	if (!facility)
+		lflags |= LOG_NEWLINE;
+	log_output(facility, level, lflags, text, text_len);
+	return text_len;
 }
 
 asmlinkage int vprintk_emit(int facility, int level,
Steven Rostedt Nov. 23, 2018, 3:56 p.m. UTC | #24
On Fri, 23 Nov 2018 13:46:47 +0100
Petr Mladek <pmladek@suse.com> wrote:

> Steven told me at the Plumbers conference that even a few initial
> characters saved him a day a few times.

Yes, and that has happened more than once. I would reboot and retest
code that is crashing, and due to a triple fault, the machine would
reboot because of some race, and the little output I get from the
console would help tremendously.

Remember, debugging the kernel is a lot like forensics, especially when
it's from a customer's site. You look at all the evidence that you can
get, and sometimes it's just 10 characters in the output that gives you
an idea of where things went wrong. I'm really not liking the buffering
idea because of this.

-- Steve
Tetsuo Handa Nov. 24, 2018, 12:24 a.m. UTC | #25
On 2018/11/24 0:56, Steven Rostedt wrote:
> On Fri, 23 Nov 2018 13:46:47 +0100
> Petr Mladek <pmladek@suse.com> wrote:
> 
>> Steven told me at the Plumbers conference that even a few initial
>> characters saved him a day a few times.
> 
> Yes, and that has happened more than once. I would reboot and retest
> code that is crashing, and due to a triple fault, the machine would
> reboot because of some race, and the little output I get from the
> console would help tremendously.
> 
> Remember, debugging the kernel is a lot like forensics, especially when
> it's from a customer's site. You look at all the evidence that you can
> get, and sometimes it's just 10 characters in the output that gives you
> an idea of where things went wrong. I'm really not liking the buffering
> idea because of this.

Then, we should not enforce buffering in a way that requires modification of
printk() callers. That is, we should not ask printk() callers to use their
private buffer. What we can do is to enable/disable line buffering inside
printk() depending on the problem the user wants to debug.
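
A sketch of such a switch (the parameter name is made up; the check would
sit at the top of try_buffered_printk()):

  static bool printk_line_buffered __read_mostly;    /* default: off */
  module_param(printk_line_buffered, bool, 0644);

  /* in try_buffered_printk() */
  if (!printk_line_buffered)
          return -1;    /* fall back to plain vprintk_emit() */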

Also, we should allow disabling "struct cont" depending on the problem (in
order to allow flushing the 10 characters in the "cont" buffer).



By the way, is the comment

  /*
   * Continuation lines are buffered, and not committed to the record buffer
   * until the line is complete, or a race forces it. The line fragments
   * though, are printed immediately to the consoles to ensure everything has
   * reached the console in case of a kernel crash.
   */

appropriate given that we don't call cont_flush() upon a kernel crash?
Sergey Senozhatsky Nov. 26, 2018, 4:34 a.m. UTC | #26
On (11/24/18 09:24), Tetsuo Handa wrote:
> >> Steven told me on Plumbers conference that even few initial
> >> characters saved him a day few times.
> > 
> > Yes, and that has happened more than once. I would reboot and retest
> > code that is crashing, and due to a triple fault, the machine would
> > reboot because of some race, and the little output I get from the
> > console would help tremendously.
> > 
> > Remember, debugging the kernel is a lot like forensics, especially when
> > it's from a customer's site. You look at all the evidence that you can
> > get, and sometimes it's just 10 characters in the output that gives you
> > an idea of where things went wrong. I'm really not liking the buffering
> > idea because of this.
> 
> Then, we should not enforce buffering in a way that requires modification of
> printk() callers. That is, we should not ask printk() callers to use their
> private buffer. What we can do is to enable/disable line buffering inside
> printk() depending on the problem the user wants to debug.

Right; overall I tend to agree with what you guys are saying and
I like Petr's "I am more and more wondering if the buffered printk
is worth the effort" comment; and I like Steven's comment on flushes;
and admire Tetsuo's efforts.

I think that printk_seq_buf/printk_buffer was never going to replace
pr_cont() and I never liked the idea. The printk_safe proposal for
lockdep had one OK thing about it - it would pass our normal marshaling
before it would reach the buffering stage. Which means - no buffering
for people who "detest" printk buffering.

This looks better:
	printk->vprintk_func->{early_printk/printk_safe/vprintk_emit}->buffering

Than this:
	pr_buffer->buffering->vprintk_func->{early_printk/printk_safe/vprintk_emit}

Another thing is - printk seq_buf/printk_buffer doesn't really solve any
problem. People who can use seq_buf/char buf[256]/etc. buffering already
can do so; people who cannot won't switch to a new buffering printk
anyway.
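
For comparison, what such callers can already do today is build the whole
line locally and emit it with a single printk() - a trivial sketch:

  char buf[256];
  int len = 0;

  len += scnprintf(buf + len, sizeof(buf) - len, "Testing feature XYZ..");
  len += scnprintf(buf + len, sizeof(buf) - len, " ... ok");
  printk(KERN_INFO "%s\n", buf);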

The bad thing about printk_safe proposal is that it's per-CPU; which
is OK for some paths (like lockdep), but not OK in general (e.g. OOM).

IMO, try_buffered_printk() attempts to solve the problem at the right
place - printk. And it does not break our normal marshaling, so we don't
"fix" printk users and we keep people, who does vprintk_func->early_printk
thing, happy. So I don't dislike try_buffered_printk() approach. And unlike
before, now we are talking about a single line buffering.

If we'd go this way, I would prefer to NOT introduce any structs and
any new code, or any new "split and log_store() in the middle" rules. Just
a bunch of "struct cont" buffers:

	static struct cont conts[N];

and cont_add()/cont_flush() to handle pr_cont, with all the flushes it
does; but on a per-context basis.
conts[0] should serve as a fallback cont buffer, in case there are
no available cont buffers left. flush_on_panic() is still miserable,
for sure; probably we can do something about it.

Or... Instead.
We can just leave pr_cont() alone for now. And make it possible to
reconstruct messages - IOW, inject some info into printk messages. We
do this at Samsung (inject CPU number at the beginning of every
message. `cat serial.0 | grep "\[1\]"` to grep for all messages from
CPU1). Probably this would be the simplest thing.

> Also, we should allow disabling "struct cont" depending on the problem (in
> order to allow flushing the 10 characters in the "cont" buffer).
>
> By the way, is the comment
>
>   /*
>    * Continuation lines are buffered, and not committed to the record buffer
>    * until the line is complete, or a race forces it. The line fragments
>    * though, are printed immediately to the consoles to ensure everything has
>    * reached the console in case of a kernel crash.
>    */

printk does not do this anymore; you are right.

> appropriate given that we don't call cont_flush() upon a kernel crash?

I tend to count on flush_on_panic more than on a "last moment"
pr_cont->cont_flush(), which was guaranteed to happen immediately
only with early_con.

A kernel crash usually has enough pr_emerg/printk-s to force cont flush.
Even pr_info() will do. We look at the loglevel much later; so even
messages which never make it to the consoles still flush the cont buffer.

	-ss
David Laight Nov. 28, 2018, 1:29 p.m. UTC | #27
From: Steven Rostedt
> > Steven told me at the Plumbers conference that even a few initial
> > characters saved him a day a few times.
> 
> Yes, and that has happened more than once. I would reboot and retest
> code that is crashing, and due to a triple fault, the machine would
> reboot because of some race, and the little output I get from the
> console would help tremendously.
> 
> Remember, debugging the kernel is a lot like forensics, especially when
> it's from a customer's site. You look at all the evidence that you can
> get, and sometimes it's just 10 characters in the output that gives you
> an idea of where things went wrong. I'm really not liking the buffering
> idea because of this.

Yep, it is a real PITA that syslogd (or linux equiv) stops messages being
written to the console by the kernel (which used to be synchronous) and
instead writes them out from userspace.
By that time it has all died.

Sometimes you want a printk() that writes the data to the serial port
before returning.

I also spent a week trying to work out why a customer kernel was
locking up - only to finally find out that the distro they were
using set 'panic on oops' - making it almost impossible to find
out what was happening.

	David

Tetsuo Handa Nov. 29, 2018, 10:09 a.m. UTC | #28
On 2018/11/28 22:29, David Laight wrote:
> I also spent a week trying to work out why a customer kernel was
> locking up - only to finally find out that the distro they were
> using set 'panic on oops' - making it almost impossible to find
> out what was happening.

How can line buffering negatively affect this case? The thread which
triggered an oops will keep calling printk() until it reaches panic(),
won't it?

The only case where line buffering hurts is when the initial 10 bytes of
a critical line fail to reach log_store() because some other thread (not
the current thread doing the series of printk() calls) halts the CPUs /
resets the whole machine _before_ those initial 10 bytes reach
log_store().



On 2018/11/26 13:34, Sergey Senozhatsky wrote:
> Or... Instead.
> We can just leave pr_cont() alone for now. And make it possible to
> reconstruct messages - IOW, inject some info into printk messages. We
> do this at Samsung (we inject the CPU number at the beginning of every
> message; `cat serial.0 | grep "\[1\]"` greps all messages from
> CPU1). Probably this would be the simplest thing.

Yes, I sent a patch which helps reconstructing messages at
http://lkml.kernel.org/r/1543045075-3008-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp .
Petr Mladek Nov. 30, 2018, 4:01 p.m. UTC | #29
On Thu 2018-11-29 19:09:26, Tetsuo Handa wrote:
> On 2018/11/28 22:29, David Laight wrote:
> > I also spent a week trying to work out why a customer kernel was
> > locking up - only to finally find out that the distro they were
> > using set 'panic on oops' - making it almost impossible to find
> > out what was happening.

Did the machine reboot before the messages reached the console, did it
produce a crash dump, or did it freeze?

panic() tries relatively hard to flush the messages to the console;
see printk_safe_flush_on_panic() and console_flush_on_panic().
It is less aggressive when a crashdump is invoked, because it might
deadlock in the console drivers.

Hmm, it might also fail when another CPU is still running and
steals console_lock. We might want to disable
console_trylock_spinning() if the caller is not
panic_cpu.
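
Something along these lines, perhaps (a sketch of the idea only; the
exact check and its placement are assumptions, not a tested patch):

	static int console_trylock_spinning(void)
	{
		int this_cpu = raw_smp_processor_id();

		/* Once a panic is in progress, never spin-steal
		 * console_lock from the panicking CPU, so that
		 * console_flush_on_panic() cannot lose it. */
		if (atomic_read(&panic_cpu) != PANIC_CPU_INVALID &&
		    atomic_read(&panic_cpu) != this_cpu)
			return 0;

		/* ... existing opportunistic spinning/handover logic ... */
		return console_trylock();
	}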


> On 2018/11/26 13:34, Sergey Senozhatsky wrote:
> > Or... Instead.
> > We can just leave pr_cont() alone for now. And make it possible to
> > reconstruct messages - IOW, inject some info into printk messages. We
> > do this at Samsung (we inject the CPU number at the beginning of every
> > message; `cat serial.0 | grep "\[1\]"` greps all messages from
> > CPU1). Probably this would be the simplest thing.
> 
> Yes, I sent a patch which helps reconstructing messages at
> http://lkml.kernel.org/r/1543045075-3008-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp .

All the buffering approaches have problems that cannot be solved
easily. The prefix-based approach looks like the best alternative
at the moment.

Best Regards,
Petr
diff mbox series

Patch

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2..fbc127d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -493,7 +493,8 @@  void get_usage_chars(struct lock_class *class, char usage[LOCK_USAGE_CHARS])
 	usage[i] = '\0';
 }
 
-static void __print_lock_name(struct lock_class *class)
+static void __print_lock_name(struct lock_class *class,
+			      struct printk_buffer *buf)
 {
 	char str[KSYM_NAME_LEN];
 	const char *name;
@@ -501,25 +502,25 @@  static void __print_lock_name(struct lock_class *class)
 	name = class->name;
 	if (!name) {
 		name = __get_key_name(class->key, str);
-		printk(KERN_CONT "%s", name);
+		printk_buffered(buf, "%s", name);
 	} else {
-		printk(KERN_CONT "%s", name);
+		printk_buffered(buf, "%s", name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
+			printk_buffered(buf, "#%d", class->name_version);
 		if (class->subclass)
-			printk(KERN_CONT "/%d", class->subclass);
+			printk_buffered(buf, "/%d", class->subclass);
 	}
 }
 
-static void print_lock_name(struct lock_class *class)
+static void print_lock_name(struct lock_class *class, struct printk_buffer *buf)
 {
 	char usage[LOCK_USAGE_CHARS];
 
 	get_usage_chars(class, usage);
 
-	printk(KERN_CONT " (");
-	__print_lock_name(class);
-	printk(KERN_CONT "){%s}", usage);
+	printk_buffered(buf, " (");
+	__print_lock_name(class, buf);
+	printk_buffered(buf, "){%s}", usage);
 }
 
 static void print_lockdep_cache(struct lockdep_map *lock)
@@ -534,8 +535,9 @@  static void print_lockdep_cache(struct lockdep_map *lock)
 	printk(KERN_CONT "%s", name);
 }
 
-static void print_lock(struct held_lock *hlock)
+static void print_lock(struct held_lock *hlock, struct printk_buffer *buf)
 {
+	bool free = false;
 	/*
 	 * We can be called locklessly through debug_show_all_locks() so be
 	 * extra careful, the hlock might have been released and cleared.
@@ -546,17 +548,24 @@  static void print_lock(struct held_lock *hlock)
 	barrier();
 
 	if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) {
-		printk(KERN_CONT "<RELEASED>\n");
+		printk_buffered(buf, "<RELEASED>\n");
 		return;
 	}
 
-	printk(KERN_CONT "%p", hlock->instance);
-	print_lock_name(lock_classes + class_idx - 1);
-	printk(KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
+	if (!buf) {
+		free = true;
+		buf = get_printk_buffer();
+	}
+	printk_buffered(buf, "%p", hlock->instance);
+	print_lock_name(lock_classes + class_idx - 1, buf);
+	printk_buffered(buf, ", at: %pS\n", (void *)hlock->acquire_ip);
+	if (free)
+		put_printk_buffer(buf);
 }
 
 static void lockdep_print_held_locks(struct task_struct *p)
 {
+	struct printk_buffer *buf;
 	int i, depth = READ_ONCE(p->lockdep_depth);
 
 	if (!depth)
@@ -570,10 +579,12 @@  static void lockdep_print_held_locks(struct task_struct *p)
 	 */
 	if (p->state == TASK_RUNNING && p != current)
 		return;
+	buf = get_printk_buffer();
 	for (i = 0; i < depth; i++) {
-		printk(" #%d: ", i);
-		print_lock(p->held_locks + i);
+		printk_buffered(buf, " #%d: ", i);
+		print_lock(p->held_locks + i, buf);
 	}
+	put_printk_buffer(buf);
 }
 
 static void print_kernel_ident(void)
@@ -1083,12 +1094,16 @@  static inline int __bfs_backwards(struct lock_list *src_entry,
 static noinline int
 print_circular_bug_entry(struct lock_list *target, int depth)
 {
+	struct printk_buffer *buf;
+
 	if (debug_locks_silent)
 		return 0;
-	printk("\n-> #%u", depth);
-	print_lock_name(target->class);
-	printk(KERN_CONT ":\n");
+	buf = get_printk_buffer();
+	printk_buffered(buf, "\n-> #%u", depth);
+	print_lock_name(target->class, buf);
+	printk_buffered(buf, ":\n");
 	print_stack_trace(&target->trace, 6);
+	put_printk_buffer(buf);
 
 	return 0;
 }
@@ -1098,6 +1113,7 @@  static inline int __bfs_backwards(struct lock_list *src_entry,
 			     struct held_lock *tgt,
 			     struct lock_list *prt)
 {
+	struct printk_buffer *buf = get_printk_buffer();
 	struct lock_class *source = hlock_class(src);
 	struct lock_class *target = hlock_class(tgt);
 	struct lock_class *parent = prt->class;
@@ -1116,31 +1132,32 @@  static inline int __bfs_backwards(struct lock_list *src_entry,
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (parent != source) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(source);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(parent);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(target);
-		printk(KERN_CONT "\n\n");
+		printk_buffered(buf, "Chain exists of:\n  ");
+		__print_lock_name(source, buf);
+		printk_buffered(buf, " --> ");
+		__print_lock_name(parent, buf);
+		printk_buffered(buf, " --> ");
+		__print_lock_name(target, buf);
+		printk_buffered(buf, "\n\n");
 	}
 
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0                    CPU1\n");
 	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(target, buf);
+	printk_buffered(buf, ");\n");
 	printk("                               lock(");
-	__print_lock_name(parent);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(source);
-	printk(KERN_CONT ");\n");
+	__print_lock_name(parent, buf);
+	printk_buffered(buf, ");\n");
+	printk_buffered(buf, "                               lock(");
+	__print_lock_name(target, buf);
+	printk_buffered(buf, ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(source, buf);
+	printk_buffered(buf, ");\n");
 	printk("\n *** DEADLOCK ***\n\n");
+	put_printk_buffer(buf);
 }
 
 /*
@@ -1164,11 +1181,11 @@  static inline int __bfs_backwards(struct lock_list *src_entry,
 	pr_warn("------------------------------------------------------\n");
 	pr_warn("%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(check_src);
+	print_lock(check_src, NULL);
 
 	pr_warn("\nbut task is already holding lock:\n");
 
-	print_lock(check_tgt);
+	print_lock(check_tgt, NULL);
 	pr_warn("\nwhich lock already depends on the new lock.\n\n");
 	pr_warn("\nthe existing dependency chain (in reverse order) is:\n");
 
@@ -1387,21 +1404,23 @@  static inline int usage_match(struct lock_list *entry, void *bit)
 
 static void print_lock_class_header(struct lock_class *class, int depth)
 {
+	struct printk_buffer *buf = get_printk_buffer();
 	int bit;
 
-	printk("%*s->", depth, "");
-	print_lock_name(class);
+	printk_buffered(buf, "%*s->", depth, "");
+	print_lock_name(class, buf);
 #ifdef CONFIG_DEBUG_LOCKDEP
-	printk(KERN_CONT " ops: %lu", debug_class_ops_read(class));
+	printk_buffered(buf, " ops: %lu", debug_class_ops_read(class));
 #endif
-	printk(KERN_CONT " {\n");
+	printk_buffered(buf, " {\n");
 
 	for (bit = 0; bit < LOCK_USAGE_STATES; bit++) {
 		if (class->usage_mask & (1 << bit)) {
 			int len = depth;
 
-			len += printk("%*s   %s", depth, "", usage_str[bit]);
-			len += printk(KERN_CONT " at:\n");
+			len += printk_buffered(buf, "%*s   %s", depth, "",
+					       usage_str[bit]);
+			len += printk_buffered(buf, " at:\n");
 			print_stack_trace(class->usage_traces + bit, len);
 		}
 	}
@@ -1409,6 +1428,7 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 
 	printk("%*s ... key      at: [<%px>] %pS\n",
 		depth, "", class->key, class->key);
+	put_printk_buffer(buf);
 }
 
 /*
@@ -1451,6 +1471,7 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 	struct lock_class *safe_class = safe_entry->class;
 	struct lock_class *unsafe_class = unsafe_entry->class;
 	struct lock_class *middle_class = prev_class;
+	struct printk_buffer *buf = get_printk_buffer();
 
 	if (middle_class == safe_class)
 		middle_class = next_class;
@@ -1469,33 +1490,34 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (middle_class != unsafe_class) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(safe_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(middle_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(unsafe_class);
-		printk(KERN_CONT "\n\n");
+		printk_buffered(buf, "Chain exists of:\n  ");
+		__print_lock_name(safe_class, buf);
+		printk_buffered(buf, " --> ");
+		__print_lock_name(middle_class, buf);
+		printk_buffered(buf, " --> ");
+		__print_lock_name(unsafe_class, buf);
+		printk_buffered(buf, "\n\n");
 	}
 
 	printk(" Possible interrupt unsafe locking scenario:\n\n");
 	printk("       CPU0                    CPU1\n");
 	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(unsafe_class);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(unsafe_class, buf);
+	printk_buffered(buf, ");\n");
 	printk("                               local_irq_disable();\n");
-	printk("                               lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(middle_class);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "                               lock(");
+	__print_lock_name(safe_class, buf);
+	printk_buffered(buf, ");\n");
+	printk_buffered(buf, "                               lock(");
+	__print_lock_name(middle_class, buf);
+	printk_buffered(buf, ");\n");
 	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "    lock(");
+	__print_lock_name(safe_class, buf);
+	printk_buffered(buf, ");\n");
 	printk("\n *** DEADLOCK ***\n\n");
+	put_printk_buffer(buf);
 }
 
 static int
@@ -1510,9 +1532,12 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 			 enum lock_usage_bit bit2,
 			 const char *irqclass)
 {
+	struct printk_buffer *buf;
+
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
 		return 0;
 
+	buf = get_printk_buffer();
 	pr_warn("\n");
 	pr_warn("=====================================================\n");
 	pr_warn("WARNING: %s-safe -> %s-unsafe lock order detected\n",
@@ -1525,26 +1550,26 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 		curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
 		curr->hardirqs_enabled,
 		curr->softirqs_enabled);
-	print_lock(next);
+	print_lock(next, buf);
 
 	pr_warn("\nand this task is already holding:\n");
-	print_lock(prev);
+	print_lock(prev, buf);
 	pr_warn("which would create a new lock dependency:\n");
-	print_lock_name(hlock_class(prev));
-	pr_cont(" ->");
-	print_lock_name(hlock_class(next));
-	pr_cont("\n");
+	print_lock_name(hlock_class(prev), buf);
+	bpr_cont(buf, " ->");
+	print_lock_name(hlock_class(next), buf);
+	bpr_cont(buf, "\n");
 
 	pr_warn("\nbut this new dependency connects a %s-irq-safe lock:\n",
 		irqclass);
-	print_lock_name(backwards_entry->class);
-	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
+	print_lock_name(backwards_entry->class, buf);
+	bpr_warn(buf, "\n... which became %s-irq-safe at:\n", irqclass);
 
 	print_stack_trace(backwards_entry->class->usage_traces + bit1, 1);
 
 	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
-	print_lock_name(forwards_entry->class);
-	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
+	print_lock_name(forwards_entry->class, buf);
+	bpr_warn(buf, "\n... which became %s-irq-unsafe at:\n", irqclass);
 	pr_warn("...");
 
 	print_stack_trace(forwards_entry->class->usage_traces + bit2, 1);
@@ -1557,18 +1582,20 @@  static void print_lock_class_header(struct lock_class *class, int depth)
 
 	pr_warn("\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass);
 	if (!save_trace(&prev_root->trace))
-		return 0;
+		goto done;
 	print_shortest_lock_dependencies(backwards_entry, prev_root);
 
 	pr_warn("\nthe dependencies between the lock to be acquired");
 	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
 	if (!save_trace(&next_root->trace))
-		return 0;
+		goto done;
 	print_shortest_lock_dependencies(forwards_entry, next_root);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
 
+ done:
+	put_printk_buffer(buf);
 	return 0;
 }
 
@@ -1721,18 +1748,20 @@  static inline void inc_chains(void)
 {
 	struct lock_class *next = hlock_class(nxt);
 	struct lock_class *prev = hlock_class(prv);
+	struct printk_buffer *buf = get_printk_buffer();
 
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0\n");
 	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(prev);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(next);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(prev, buf);
+	printk_buffered(buf, ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(next, buf);
+	printk_buffered(buf, ");\n");
 	printk("\n *** DEADLOCK ***\n\n");
 	printk(" May be due to missing lock nesting notation\n\n");
+	put_printk_buffer(buf);
 }
 
 static int
@@ -1749,9 +1778,9 @@  static inline void inc_chains(void)
 	pr_warn("--------------------------------------------\n");
 	pr_warn("%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(next);
+	print_lock(next, NULL);
 	pr_warn("\nbut task is already holding lock:\n");
-	print_lock(prev);
+	print_lock(prev, NULL);
 
 	pr_warn("\nother info that might help us debug this:\n");
 	print_deadlock_scenario(next, prev);
@@ -2048,18 +2077,20 @@  static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 print_chain_key_iteration(int class_idx, u64 chain_key,
+				     struct printk_buffer *buf)
 {
 	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
 
-	printk(" class_idx:%d -> chain_key:%016Lx",
+	printk_buffered(buf, " class_idx:%d -> chain_key:%016Lx",
 		class_idx,
 		(unsigned long long)new_chain_key);
 	return new_chain_key;
 }
 
-static void
-print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next)
+static void print_chain_keys_held_locks(struct task_struct *curr,
+					struct held_lock *hlock_next,
+					struct printk_buffer *buf)
 {
 	struct held_lock *hlock;
 	u64 chain_key = 0;
@@ -2069,16 +2100,18 @@  static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
 	printk("depth: %u\n", depth + 1);
 	for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
+		chain_key = print_chain_key_iteration(hlock->class_idx,
+						      chain_key, buf);
 
-		print_lock(hlock);
+		print_lock(hlock, buf);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
-	print_lock(hlock_next);
+	print_chain_key_iteration(hlock_next->class_idx, chain_key, buf);
+	print_lock(hlock_next, buf);
 }
 
-static void print_chain_keys_chain(struct lock_chain *chain)
+static void print_chain_keys_chain(struct lock_chain *chain,
+				   struct printk_buffer *buf)
 {
 	int i;
 	u64 chain_key = 0;
@@ -2087,10 +2120,11 @@  static void print_chain_keys_chain(struct lock_chain *chain)
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
 		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id + 1, chain_key);
+		chain_key = print_chain_key_iteration(class_id + 1, chain_key,
+						      buf);
 
-		print_lock_name(lock_classes + class_id);
-		printk("\n");
+		print_lock_name(lock_classes + class_id, buf);
+		printk_buffered(buf, "\n");
 	}
 }
 
@@ -2098,6 +2132,8 @@  static void print_collision(struct task_struct *curr,
 			struct held_lock *hlock_next,
 			struct lock_chain *chain)
 {
+	struct printk_buffer *buf = get_printk_buffer();
+
 	pr_warn("\n");
 	pr_warn("============================\n");
 	pr_warn("WARNING: chain_key collision\n");
@@ -2106,14 +2142,15 @@  static void print_collision(struct task_struct *curr,
 	pr_warn("%s/%d: ", current->comm, task_pid_nr(current));
 	pr_warn("Hash chain already cached but the contents don't match!\n");
 
-	pr_warn("Held locks:");
-	print_chain_keys_held_locks(curr, hlock_next);
+	bpr_warn(buf, "Held locks:");
+	print_chain_keys_held_locks(curr, hlock_next, buf);
 
-	pr_warn("Locks in cached chain:");
-	print_chain_keys_chain(chain);
+	bpr_warn(buf, "Locks in cached chain:");
+	print_chain_keys_chain(chain, buf);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
+	put_printk_buffer(buf);
 }
 #endif
 
@@ -2421,18 +2458,20 @@  static void check_chain_key(struct task_struct *curr)
 print_usage_bug_scenario(struct held_lock *lock)
 {
 	struct lock_class *class = hlock_class(lock);
+	struct printk_buffer *buf = get_printk_buffer();
 
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0\n");
 	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "  lock(");
+	__print_lock_name(class, buf);
+	printk_buffered(buf, ");\n");
 	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
+	printk_buffered(buf, "    lock(");
+	__print_lock_name(class, buf);
+	printk_buffered(buf, ");\n");
 	printk("\n *** DEADLOCK ***\n\n");
+	put_printk_buffer(buf);
 }
 
 static int
@@ -2457,7 +2496,7 @@  static void check_chain_key(struct task_struct *curr)
 		trace_softirq_context(curr), softirq_count() >> SOFTIRQ_SHIFT,
 		trace_hardirqs_enabled(curr),
 		trace_softirqs_enabled(curr));
-	print_lock(this);
+	print_lock(this, NULL);
 
 	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
 	print_stack_trace(hlock_class(this)->usage_traces + prev_bit, 1);
@@ -2500,6 +2539,7 @@  static int mark_lock(struct task_struct *curr, struct held_lock *this,
 			struct held_lock *this, int forwards,
 			const char *irqclass)
 {
+	struct printk_buffer *buf;
 	struct lock_list *entry = other;
 	struct lock_list *middle = NULL;
 	int depth;
@@ -2514,13 +2554,15 @@  static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	pr_warn("--------------------------------------------------------\n");
 	pr_warn("%s/%d just changed the state of lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(this);
+	print_lock(this, NULL);
 	if (forwards)
 		pr_warn("but this lock took another, %s-unsafe lock in the past:\n", irqclass);
 	else
 		pr_warn("but this lock was taken by another, %s-safe lock in the past:\n", irqclass);
-	print_lock_name(other->class);
-	pr_warn("\n\nand interrupts could create inverse lock ordering between them.\n\n");
+	buf = get_printk_buffer();
+	print_lock_name(other->class, buf);
+	bpr_warn(buf, "\n\nand interrupts could create inverse lock ordering between them.\n\n");
+	put_printk_buffer(buf);
 
 	pr_warn("\nother info that might help us debug this:\n");
 
@@ -3077,7 +3119,7 @@  static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	 */
 	if (ret == 2) {
 		printk("\nmarked lock as {%s}:\n", usage_str[new_bit]);
-		print_lock(this);
+		print_lock(this, NULL);
 		print_irqtrace_events(curr);
 		dump_stack();
 	}
@@ -3172,7 +3214,7 @@  void lockdep_init_map(struct lockdep_map *lock, const char *name,
 	pr_warn("----------------------------------\n");
 
 	pr_warn("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
-	print_lock(hlock);
+	print_lock(hlock, NULL);
 
 	pr_warn("\nbut this task is not holding:\n");
 	pr_warn("%s\n", hlock->nest_lock->name);
@@ -4300,7 +4342,7 @@  void __init lockdep_init(void)
 	pr_warn("-------------------------\n");
 	pr_warn("%s/%d is freeing memory %px-%px, with a lock still held there!\n",
 		curr->comm, task_pid_nr(curr), mem_from, mem_to-1);
-	print_lock(hlock);
+	print_lock(hlock, NULL);
 	lockdep_print_held_locks(curr);
 
 	pr_warn("\nstack backtrace:\n");