| Message ID | 20170322021443.26397-1-tom.leiming@gmail.com (mailing list archive) |
| --- | --- |
| State | New, archived |
On 03/21/2017 10:14 PM, Ming Lei wrote:
> When iterating busy requests in timeout handler,
> if the STARTED flag of one request isn't set, that means
> the request is being processed in block layer or driver, and
> isn't submitted to hardware yet.
>
> In the current implementation of blk_mq_check_expired(),
> if the request queue becomes dying, un-started requests are
> handled as being completed/freed immediately. This is
> wrong, and can cause rq corruption or double allocation[1][2],
> when doing I/O and removing & resetting an NVMe device at the same time.

I agree, completing it looks bogus. If the request is in a scheduler or
on a software queue, this won't end well at all. Looks like it was
introduced by this patch:

commit eb130dbfc40eabcd4e10797310bda6b9f6dd7e76
Author: Keith Busch <keith.busch@intel.com>
Date:   Thu Jan 8 08:59:53 2015 -0700

    blk-mq: End unstarted requests on a dying queue

Before that, we just ignored it. Keith?
On Tue, Mar 21, 2017 at 11:03:59PM -0400, Jens Axboe wrote:
> On 03/21/2017 10:14 PM, Ming Lei wrote:
> > When iterating busy requests in timeout handler,
> > if the STARTED flag of one request isn't set, that means
> > the request is being processed in block layer or driver, and
> > isn't submitted to hardware yet.
> >
> > In the current implementation of blk_mq_check_expired(),
> > if the request queue becomes dying, un-started requests are
> > handled as being completed/freed immediately. This is
> > wrong, and can cause rq corruption or double allocation[1][2],
> > when doing I/O and removing & resetting an NVMe device at the same time.
>
> I agree, completing it looks bogus. If the request is in a scheduler or
> on a software queue, this won't end well at all. Looks like it was
> introduced by this patch:
>
> commit eb130dbfc40eabcd4e10797310bda6b9f6dd7e76
> Author: Keith Busch <keith.busch@intel.com>
> Date:   Thu Jan 8 08:59:53 2015 -0700
>
>     blk-mq: End unstarted requests on a dying queue
>
> Before that, we just ignored it. Keith?

The above was intended for a stopped hctx on a dying queue such that
there's nothing in flight to the driver. NVMe had been relying on this
to end unstarted requests so we may make progress when a controller dies.

We've since obviated the need: we restart the hw queues to flush entered
requests to failure, so we don't need that brokenness.
On 03/22/2017 11:58 AM, Keith Busch wrote:
> On Tue, Mar 21, 2017 at 11:03:59PM -0400, Jens Axboe wrote:
>> On 03/21/2017 10:14 PM, Ming Lei wrote:
>>> When iterating busy requests in timeout handler,
>>> if the STARTED flag of one request isn't set, that means
>>> the request is being processed in block layer or driver, and
>>> isn't submitted to hardware yet.
>>>
>>> In the current implementation of blk_mq_check_expired(),
>>> if the request queue becomes dying, un-started requests are
>>> handled as being completed/freed immediately. This is
>>> wrong, and can cause rq corruption or double allocation[1][2],
>>> when doing I/O and removing & resetting an NVMe device at the same time.
>>
>> I agree, completing it looks bogus. If the request is in a scheduler or
>> on a software queue, this won't end well at all. Looks like it was
>> introduced by this patch:
>>
>> commit eb130dbfc40eabcd4e10797310bda6b9f6dd7e76
>> Author: Keith Busch <keith.busch@intel.com>
>> Date:   Thu Jan 8 08:59:53 2015 -0700
>>
>>     blk-mq: End unstarted requests on a dying queue
>>
>> Before that, we just ignored it. Keith?
>
> The above was intended for a stopped hctx on a dying queue such that
> there's nothing in flight to the driver. NVMe had been relying on this
> to end unstarted requests so we may make progress when a controller dies.
>
> We've since obviated the need: we restart the hw queues to flush entered
> requests to failure, so we don't need that brokenness.

Good, thanks for confirming, Keith. I queued up the patch for 4.11 this
morning.
On Wed, Mar 22, 2017 at 11:58:17AM -0400, Keith Busch wrote:
> On Tue, Mar 21, 2017 at 11:03:59PM -0400, Jens Axboe wrote:
> > On 03/21/2017 10:14 PM, Ming Lei wrote:
> > > When iterating busy requests in timeout handler,
> > > if the STARTED flag of one request isn't set, that means
> > > the request is being processed in block layer or driver, and
> > > isn't submitted to hardware yet.
> > >
> > > In the current implementation of blk_mq_check_expired(),
> > > if the request queue becomes dying, un-started requests are
> > > handled as being completed/freed immediately. This is
> > > wrong, and can cause rq corruption or double allocation[1][2],
> > > when doing I/O and removing & resetting an NVMe device at the same time.
> >
> > I agree, completing it looks bogus. If the request is in a scheduler or
> > on a software queue, this won't end well at all. Looks like it was
> > introduced by this patch:
> >
> > commit eb130dbfc40eabcd4e10797310bda6b9f6dd7e76
> > Author: Keith Busch <keith.busch@intel.com>
> > Date:   Thu Jan 8 08:59:53 2015 -0700
> >
> >     blk-mq: End unstarted requests on a dying queue
> >
> > Before that, we just ignored it. Keith?
>
> The above was intended for a stopped hctx on a dying queue such that
> there's nothing in flight to the driver. NVMe had been relying on this
> to end unstarted requests so we may make progress when a controller dies.

So the brokenness has been there from the beginning.

> We've since obviated the need: we restart the hw queues to flush entered
> requests to failure, so we don't need that brokenness.

Looks like the following commit needs to be backported too if we backport
this patch:

commit 69d9a99c258eb1d6478fd9608a2070890797eed7
Author: Keith Busch <keith.busch@intel.com>
Date:   Wed Feb 24 09:15:56 2016 -0700

    NVMe: Move error handling to failed reset handler

Thanks,
Ming
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a4546f060e80..08a49c69738b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -697,17 +697,8 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 {
 	struct blk_mq_timeout_data *data = priv;
 
-	if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
-		/*
-		 * If a request wasn't started before the queue was
-		 * marked dying, kill it here or it'll go unnoticed.
-		 */
-		if (unlikely(blk_queue_dying(rq->q))) {
-			rq->errors = -EIO;
-			blk_mq_end_request(rq, rq->errors);
-		}
+	if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
 		return;
-	}
 
 	if (time_after_eq(jiffies, rq->deadline)) {
 		if (!blk_mark_rq_complete(rq))