
[-next,2/3] blktrace: fix possible memleak in '__blk_trace_remove'

Message ID 20221017065321.2846017-3-yebin10@huawei.com (mailing list archive)
State New, archived
Series fix possible memleak in '__blk_trace_remove'

Commit Message

yebin (H) Oct. 17, 2022, 6:53 a.m. UTC
When testing as follows (see the reproducer sketch below):
step1: ioctl(sda, BLKTRACESETUP, &arg)
step2: ioctl(sda, BLKTRACESTART, NULL)
step3: ioctl(sda, BLKTRACETEARDOWN, NULL)
step4: ioctl(sda, BLKTRACESETUP, &arg)
The following issue occurs:
debugfs: File 'dropped' in directory 'sda' already present!
debugfs: File 'msg' in directory 'sda' already present!
debugfs: File 'trace0' in directory 'sda' already present!
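
The sequence above can be reproduced with a small program along these lines. This is only a hedged sketch, not the original reproducer; the device path and trace parameters are illustrative, and it needs root plus a mounted debugfs.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/blktrace_api.h>

int main(void)
{
	struct blk_user_trace_setup buts;
	int fd = open("/dev/sda", O_RDONLY);	/* illustrative device */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&buts, 0, sizeof(buts));
	buts.act_mask = BLK_TC_READ | BLK_TC_WRITE;	/* illustrative mask */
	buts.buf_size = 64 * 1024;
	buts.buf_nr = 4;

	if (ioctl(fd, BLKTRACESETUP, &buts))	/* step1 */
		perror("BLKTRACESETUP");
	if (ioctl(fd, BLKTRACESTART, NULL))	/* step2 */
		perror("BLKTRACESTART");
	if (ioctl(fd, BLKTRACETEARDOWN, NULL))	/* step3: no BLKTRACESTOP first */
		perror("BLKTRACETEARDOWN");
	if (ioctl(fd, BLKTRACESETUP, &buts))	/* step4: dmesg shows "already present" */
		perror("BLKTRACESETUP");

	close(fd);
	return 0;
}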

Syzkaller also reported a similar issue, "KASAN: use-after-free Read in relay_switch_subbuf":
https://syzkaller.appspot.com/bug?id=13849f0d9b1b818b087341691be6cc3ac6a6bfb7

If the block trace is removed without first stopping it (BLKTRACESTOP), '__blk_trace_remove'
will just set 'q->blk_trace' to NULL. However, the debugfs files are not removed, so the next
BLKTRACESETUP reports that the files are already present:
static int __blk_trace_remove(struct request_queue *q)
{
	struct blk_trace *bt;

	bt = rcu_replace_pointer(q->blk_trace, NULL,
				 lockdep_is_held(&q->debugfs_mutex));
	if (!bt)
		return -EINVAL;

	if (bt->trace_state != Blktrace_running)
		blk_trace_cleanup(q, bt);

	return 0;
}

If the test is done as follows:
step1: ioctl(sda, BLKTRACESETUP, &arg)
step2: ioctl(sda, BLKTRACESTART, NULL)
step3: ioctl(sda, BLKTRACETEARDOWN, NULL)
step4: remove sda

Removing sda removes its debugfs directory, which recursively removes all files
under that directory:
>> blk_release_queue
>>	debugfs_remove_recursive(q->debugfs_dir)
So all files created in 'do_blk_trace_setup' are removed, and
'dentry->d_inode' is NULL. But the blk_trace is still on 'running_trace_list',
and 'trace_note_tsk' will traverse all nodes on that list:
>> trace_note_tsk
>>   trace_note
>>     relay_reserve
>>       relay_switch_subbuf
>>         d_inode(buf->dentry)->i_size

To solve the above issues, take the same approach as commit 5afedf670caf: in
'__blk_trace_remove', first stop the block trace if its state is 'Blktrace_running'.
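
Applying the hunk below to the function quoted above, '__blk_trace_remove' becomes:

static int __blk_trace_remove(struct request_queue *q)
{
	struct blk_trace *bt;

	bt = rcu_replace_pointer(q->blk_trace, NULL,
				 lockdep_is_held(&q->debugfs_mutex));
	if (!bt)
		return -EINVAL;

	if (bt->trace_state == Blktrace_running)
		blk_trace_switch_state(bt, 0);

	blk_trace_cleanup(q, bt);

	return 0;
}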

Signed-off-by: Ye Bin <yebin10@huawei.com>
---
 kernel/trace/blktrace.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Yu Kuai Oct. 18, 2022, 7:38 a.m. UTC | #1
Hi,

On 2022/10/17 14:53, Ye Bin wrote:
> To solve above issues, reference commit '5afedf670caf', first stop block trace
> when block trace state is 'Blktrace_running' in '__blk_trace_remove'.

Would it be much simpler to just return -EBUSY in blk_trace_remove if the
state is still running?

And add a similar check in blk_trace_setup.

Thanks,
Kuai
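
A minimal sketch of the alternative suggested here, purely for illustration (this is not what the posted patch does, and the exact placement of the check is an assumption):

static int __blk_trace_remove(struct request_queue *q)
{
	struct blk_trace *bt;

	bt = rcu_dereference_protected(q->blk_trace,
				       lockdep_is_held(&q->debugfs_mutex));
	if (!bt)
		return -EINVAL;

	/* refuse the teardown instead of stopping the trace implicitly */
	if (bt->trace_state == Blktrace_running)
		return -EBUSY;

	bt = rcu_replace_pointer(q->blk_trace, NULL,
				 lockdep_is_held(&q->debugfs_mutex));
	blk_trace_cleanup(q, bt);

	return 0;
}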
Christoph Hellwig Oct. 18, 2022, 7:41 a.m. UTC | #2
On Mon, Oct 17, 2022 at 02:53:20PM +0800, Ye Bin wrote:
> +	if (bt->trace_state == Blktrace_running)
> +		blk_trace_switch_state(bt, 0);

AFAICS blk_trace_switch_state already has that state check, so there
should be no need to duplicate it here.

I think having this call in blk_trace_cleanup itself might be a little
more obvious, too.
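
A rough sketch of this suggestion, assuming blk_trace_cleanup in that tree has roughly the body shown here (the helper already checks the state, so it can be called unconditionally):

static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt)
{
	/* stops the trace only if it is still running */
	blk_trace_switch_state(bt, 0);
	synchronize_rcu();
	blk_trace_free(q, bt);
	put_probe_ref();
}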

Patch

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index edd83e213580..0d93a0110ab5 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -395,8 +395,10 @@  static int __blk_trace_remove(struct request_queue *q)
 	if (!bt)
 		return -EINVAL;
 
-	if (bt->trace_state != Blktrace_running)
-		blk_trace_cleanup(q, bt);
+	if (bt->trace_state == Blktrace_running)
+		blk_trace_switch_state(bt, 0);
+
+	blk_trace_cleanup(q, bt);
 
 	return 0;
 }