[0/5,v3] rq-qos memory barrier shenanigans

Message ID 20190716201929.79142-1-josef@toxicpanda.com

Josef Bacik July 16, 2019, 8:19 p.m. UTC
This is the patch series to address the hang we saw in production because of
missed wakeups, and the other issues that Oleg noticed while reviewing the code.

v2->v3:
- apparently I don't understand what READ/WRITE_ONCE does
- set ourselves to TASK_UNINTERRUPTIBLE on wakeup just in case
- add a comment about why we don't need a mb for the first data.token check
  which I'm sure Oleg will tell me is wrong and I'll have to send a v4

v1->v2:
- rename wq_has_multiple_sleepers to wq_has_single_sleeper
- fix the check for has_sleepers in the missed wake-ups patch
- fix the barrier issues around got_token that Oleg noticed
- dropped the has_sleepers reset that Oleg noticed we didn't need

Thanks,

Josef

Comments

Oleg Nesterov July 18, 2019, 3:56 p.m. UTC | #1
On 07/16, Josef Bacik wrote:
>
> - add a comment about why we don't need a mb for the first data.token check
>   which I'm sure Oleg will tell me is wrong and I'll have to send a v4

we don't need it because prepare_to_wait_exclusive() does set_current_state()
and this implies mb().

(in fact I think in this particular case it is not needed at all because
 rq_qos_wake_function() sets condition == T under wq_head->lock)

I see nothing wrong in this series. Feel free to add

Reviewed-by: Oleg Nesterov <oleg@redhat.com>

But why does it use wq_has_sleeper() ? I do not understand why we need
mb() before waitqueue_active() in this particular case...


and just for the record: iiuc acquire_inflight_cb() can't block, so it is
not clear to me why, performance-wise, this all is actually better than just

	void rq_qos_wait(struct rq_wait *rqw, void *private_data,
			 acquire_inflight_cb_t *acquire_inflight_cb)
	{
		struct rq_qos_wait_data data = {
			.wq = {
			.flags	= WQ_FLAG_EXCLUSIVE,
				.func	= rq_qos_wake_function,
				.entry	= LIST_HEAD_INIT(data.wq.entry),
			},
			.task = current,
			.rqw = rqw,
			.cb = acquire_inflight_cb,
			.private_data = private_data,
		};

		if (!wq_has_sleeper(&rqw->wait) && acquire_inflight_cb(rqw, private_data))
			return;

		spin_lock_irq(&rqw->wait.lock);
		if (list_empty(&data.wq.entry) && acquire_inflight_cb(rqw, private_data))
			data.got_token = true;
		else
			__add_wait_queue_entry_tail(&rqw->wait, &data.wq);
		spin_unlock_irq(&rqw->wait.lock);

		for (;;) {
			set_current_state(TASK_UNINTERRUPTIBLE);
			if (data.got_token)
				break;
			io_schedule();
		}
		finish_wait(&rqw->wait, &data.wq);
	}

note also that the acquire_inflight_cb argument goes away.

Nevermind, feel free to ignore.

Oleg.
Jens Axboe July 18, 2019, 4:20 p.m. UTC | #2
On 7/16/19 2:19 PM, Josef Bacik wrote:
> This is the patch series to address the hang we saw in production because of
> missed wakeups, and the other issues that Oleg noticed while reviewing the code.
> 
> v2->v3:
> - apparently I don't understand what READ/WRITE_ONCE does
> - set ourselves to TASK_UNINTERRUPTIBLE on wakeup just in case
> - add a comment about why we don't need a mb for the first data.token check
>    which I'm sure Oleg will tell me is wrong and I'll have to send a v4
> 
> v1->v2:
> - rename wq_has_multiple_sleepers to wq_has_single_sleeper
> - fix the check for has_sleepers in the missed wake-ups patch
> - fix the barrier issues around got_token that Oleg noticed
> - dropped the has_sleepers reset that Oleg noticed we didn't need

Thanks Josef, applied for 5.3.