[V13,07/10] mmc: block: blk-mq: Add support for direct completion

Message ID 1509715220-31885-8-git-send-email-adrian.hunter@intel.com (mailing list archive)
State New, archived

Commit Message

Adrian Hunter Nov. 3, 2017, 1:20 p.m. UTC
For blk-mq, add support for completing requests directly in the ->done
callback. That means that error handling and urgent background operations
must be handled by recovery_work in that case.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/block.c | 100 +++++++++++++++++++++++++++++++++++++++++------
 drivers/mmc/core/block.h |   1 +
 drivers/mmc/core/queue.c |   5 ++-
 drivers/mmc/core/queue.h |   6 +++
 include/linux/mmc/host.h |   1 +
 5 files changed, 101 insertions(+), 12 deletions(-)

Comments

Linus Walleij Nov. 8, 2017, 9:28 a.m. UTC | #1
On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:

> For blk-mq, add support for completing requests directly in the ->done
> callback. That means that error handling and urgent background operations
> must be handled by recovery_work in that case.
>
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>

I tried enabling this on my MMC host (mmci) but I got weird
DMA error messages when I did.

I guess this has not been tested on a non-DMA-coherent
system?

I think I might be seeing this because the .pre and .post
callbacks need to be strictly sequenced, and this is
maybe not taken into account here? Isn't there a risk
that the .post callback of the next request is called before
the .post callback of the previous request has returned
for example?

Yours,
Linus Walleij
Adrian Hunter Nov. 9, 2017, 7:27 a.m. UTC | #2
On 08/11/17 11:28, Linus Walleij wrote:
> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> 
>> For blk-mq, add support for completing requests directly in the ->done
>> callback. That means that error handling and urgent background operations
>> must be handled by recovery_work in that case.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> 
> I tried enabling this on my MMC host (mmci) but I got weird
> DMA error messages when I did.
> 
> I guess this has not been tested on a non-DMA-coherent
> system?

I don't see what DMA-coherence has to do with anything.

Possibilities:
	- DMA unmapping doesn't work in an atomic context
	- requests' DMA operations have to be synchronized with each other

> I think I might be seeing this because the .pre and .post
> callbacks need to be strictly sequenced, and this is
> maybe not taken into account here?

I looked at mmci but that did not seem to be the case.

> Isn't there a risk
> that the .post callback of the next request is called before
> the .post callback of the previous request has returned
> for example?

Of course, the requests are treated as independent.  If the separate DMA
operations require synchronization, that is for the host driver to fix.
Linus Walleij Nov. 9, 2017, 12:34 p.m. UTC | #3
On Thu, Nov 9, 2017 at 8:27 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
> On 08/11/17 11:28, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>
>>> For blk-mq, add support for completing requests directly in the ->done
>>> callback. That means that error handling and urgent background operations
>>> must be handled by recovery_work in that case.
>>>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>
>> I tried enabling this on my MMC host (mmci) but I got weird
>> DMA error messages when I did.
>>
>> I guess this has not been tested on a non-DMA-coherent
>> system?
>
> I don't see what DMA-coherence has to do with anything.
>
> Possibilities:
>         - DMA unmapping doesn't work in an atomic context
>         - requests' DMA operations have to be synchronized with each other

So since MMCI needs the post_req() hook called with
an error code to properly tear down any DMA operations,
I was worried that maybe your error path is not doing this
(passing an error code or calling in the right order).

I had a bunch of fallouts in my own patch set relating
to that.

>> I think I might be seeing this because the .pre and .post
>> callbacks need to be strictly sequenced, and this is
>> maybe not taken into account here?
>
> I looked at mmci but that did not seem to be the case.
>
>> Isn't there a risk
>> that the .post callback of the next request is called before
>> the .post callback of the previous request has returned
>> for example?
>
> Of course, the requests are treated as independent.  If the separate DMA
> operations require synchronization, that is for the host driver to fix.

They are treated as independent by the block layer, but
it is the subsystem's duty to serialize them for the hardware.

MMCI strictly requires that pre/post hooks per request
happen in the right order, so if you have prepared a second
request after submitting the first, and the first fails, you have
to back out by unpreparing the second one before unpreparing
the first. It is also the only host driver that requires being passed
an error code in the last parameter to the post hook in
order to work properly.

I think your patch set handles that nicely though, because I
haven't seen any errors; it's just when we do this direct
completion that I see problems.
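
To make the ordering concrete, here is a rough sketch of that back-out
order (a hypothetical helper, not part of this patch set; the error
codes are only illustrative):

#include <linux/errno.h>
#include <linux/mmc/host.h>

/*
 * req_b was prepared with ->pre_req() while req_a was still in flight,
 * and req_a then failed.
 */
static void example_unprepare_on_error(struct mmc_host *host,
				       struct mmc_request *req_a,
				       struct mmc_request *req_b,
				       int err)
{
	if (!host->ops->post_req)
		return;

	/* Unprepare the later, never-started request first... */
	host->ops->post_req(host, req_b, -EINVAL);
	/* ...then the failed one, passing its real error code. */
	host->ops->post_req(host, req_a, err);
}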

Yours,
Linus Walleij
Ulf Hansson Nov. 9, 2017, 1:07 p.m. UTC | #4
On 3 November 2017 at 14:20, Adrian Hunter <adrian.hunter@intel.com> wrote:
> For blk-mq, add support for completing requests directly in the ->done
> callback. That means that error handling and urgent background operations
> must be handled by recovery_work in that case.

As the mmc docs suck, I think it's important that we elaborate a bit
more, here in the changelog, on the constraints this puts on the host
driver.

Something along the lines of, "Using MMC_CAP_DIRECT_COMPLETE requires the
host driver, when calling mmc_request_done(), to cope with the fact that its
->post_req() callback may be called immediately from the same context,
etc.."
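
In host driver terms, the constraint could look roughly like this (a
minimal sketch with a made-up driver, only to illustrate the calling
context, not code from this series):

#include <linux/interrupt.h>
#include <linux/mmc/host.h>

struct example_host {
	struct mmc_host *mmc;
	struct mmc_request *mrq;
};

static irqreturn_t example_host_irq(int irq, void *dev_id)
{
	struct example_host *host = dev_id;
	struct mmc_request *mrq = host->mrq;

	host->mrq = NULL;

	/*
	 * With MMC_CAP_DIRECT_COMPLETE, the core may call ->post_req()
	 * and complete the block request synchronously inside this call,
	 * i.e. still in interrupt context, so those paths must be
	 * atomic-safe.
	 */
	mmc_request_done(host->mmc, mrq);

	return IRQ_HANDLED;
}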

Otherwise this looks good to me.

Kind regards
Uffe

Adrian Hunter Nov. 9, 2017, 1:15 p.m. UTC | #5
On 09/11/17 15:07, Ulf Hansson wrote:
> On 3 November 2017 at 14:20, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> For blk-mq, add support for completing requests directly in the ->done
>> callback. That means that error handling and urgent background operations
>> must be handled by recovery_work in that case.
> 
> As the mmc docs suck, I think it's important that we elaborate a bit
> more, here in the changelog, on the constraints this puts on the host
> driver.
> 
> Something along the lines of, "Using MMC_CAP_DIRECT_COMPLETE requires the
> host driver, when calling mmc_request_done(), to cope with the fact that its
> ->post_req() callback may be called immediately from the same context,
> etc.."

Yes, I will expand it.  It is also a stepping stone toward adding support
for issuing the next request from the ->done() callback.

> 
> Otherwise this looks good to me.
> 
> Kind regards
> Uffe
> 
Adrian Hunter Nov. 9, 2017, 3:33 p.m. UTC | #6
On 09/11/17 14:34, Linus Walleij wrote:
> On Thu, Nov 9, 2017 at 8:27 AM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> On 08/11/17 11:28, Linus Walleij wrote:
>>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter <adrian.hunter@intel.com> wrote:
>>>
>>>> For blk-mq, add support for completing requests directly in the ->done
>>>> callback. That means that error handling and urgent background operations
>>>> must be handled by recovery_work in that case.
>>>>
>>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>>
>>> I tried enabling this on my MMC host (mmci) but I got weird
>>> DMA error messages when I did.
>>>
>>> I guess this has not been tested on a non-DMA-coherent
>>> system?
>>
>> I don't see what DMA-coherence has to do with anything.
>>
>> Possibilities:
>>         - DMA unmapping doesn't work in an atomic context
>>         - requests' DMA operations have to be synchronized with each other
> 
> So since MMCI needs the post_req() hook called with
> an error code to properly tear down any DMA operations,
> I was worried that maybe your error path is not doing this
> (passing an error code or calling in the right order).
> 
> I had a bunch of fallouts in my own patch set relating
> to that.
> 
>>> I think I might be seeing this because the .pre and .post
>>> callbacks need to be strictly sequenced, and this is
>>> maybe not taken into account here?
>>
>> I looked at mmci but that did not seem to be the case.
>>
>>> Isn't there a risk
>>> that the .post callback of the next request is called before
>>> the .post callback of the previous request has returned
>>> for example?
>>
>> Of course, the requests are treated as independent.  If the separate DMA
>> operations require synchronization, that is for the host driver to fix.
> 
> They are treated as independent by the block layer, but
> it is the subsystem's duty to serialize them for the hardware.
> 
> MMCI strictly requires that pre/post hooks per request
> happen in the right order, so if you have prepared a second
> request after submitting the first, and the first fails, you have
> to back out by unpreparing the second one before unpreparing
> the first. It is also the only host driver that requires being passed
> an error code in the last parameter to the post hook in
> order to work properly.
> 
> I think your patch set handles that nicely though, because I
> haven't seen any errors; it's just when we do this direct
> completion that I see problems.

If a request gets an error, then we always do the post_req before starting
another request, so the driver can assume that the first request finished
successfully if it is asked to do post_req on the second request.  So the
driver does have enough information to keep the DMA unmapping in order if
necessary.
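
As an illustration of that, a host driver's ->post_req() hook can simply
unmap whatever request it is handed, and the ordering falls out of the
guarantee above.  A minimal sketch (hypothetical driver, not mmci):

#include <linux/dma-mapping.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

static void example_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
			     int err)
{
	struct mmc_data *data = mrq->data;

	if (!data || !data->host_cookie)
		return;

	/*
	 * Called from the issuing thread on an error, or, with
	 * MMC_CAP_DIRECT_COMPLETE, directly from mmc_request_done().
	 * Since a failed request's post_req always runs before the next
	 * request is started, unmapping here preserves submission order.
	 */
	dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
		     data->flags & MMC_DATA_READ ?
		     DMA_FROM_DEVICE : DMA_TO_DEVICE);
	data->host_cookie = 0;
}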

Patch

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index e8be17152884..cbb4b35a592d 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2086,6 +2086,22 @@  static void mmc_blk_rw_recovery(struct mmc_queue *mq, struct request *req)
 	}
 }
 
+static inline bool mmc_blk_rq_error(struct mmc_blk_request *brq)
+{
+	mmc_blk_eval_resp_error(brq);
+
+	return brq->sbc.error || brq->cmd.error || brq->stop.error ||
+	       brq->data.error || brq->cmd.resp[0] & CMD_ERRORS;
+}
+
+static inline void mmc_blk_rw_reset_success(struct mmc_queue *mq,
+					    struct request *req)
+{
+	int type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+
+	mmc_blk_reset_success(mq->blkdata, type);
+}
+
 static void mmc_blk_mq_complete_rq(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
@@ -2167,14 +2183,43 @@  static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
 	if (host->ops->post_req)
 		host->ops->post_req(host, mrq, 0);
 
-	blk_mq_complete_request(req);
+	/*
+	 * Block layer timeouts race with completions which means the normal
+	 * completion path cannot be used during recovery.
+	 */
+	if (mq->in_recovery)
+		mmc_blk_mq_complete_rq(mq, req);
+	else
+		blk_mq_complete_request(req);
 
 	mmc_blk_mq_acct_req_done(mq, req);
 }
 
+void mmc_blk_mq_recovery(struct mmc_queue *mq)
+{
+	struct request *req = mq->recovery_req;
+	struct mmc_host *host = mq->card->host;
+	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+
+	mq->recovery_req = NULL;
+	mq->rw_wait = false;
+
+	if (mmc_blk_rq_error(&mqrq->brq)) {
+		mmc_retune_hold_now(host);
+		mmc_blk_rw_recovery(mq, req);
+	}
+
+	mmc_blk_urgent_bkops(mq, mqrq);
+
+	mmc_blk_mq_post_req(mq, req);
+}
+
 static void mmc_blk_mq_complete_prev_req(struct mmc_queue *mq,
 					 struct request **prev_req)
 {
+	if (mmc_queue_direct_complete(mq->card->host))
+		return;
+
 	mutex_lock(&mq->complete_lock);
 
 	if (!mq->complete_req)
@@ -2208,19 +2253,43 @@  static void mmc_blk_mq_req_done(struct mmc_request *mrq)
 	struct request *req = mmc_queue_req_to_req(mqrq);
 	struct request_queue *q = req->q;
 	struct mmc_queue *mq = q->queuedata;
+	struct mmc_host *host = mq->card->host;
 	unsigned long flags;
-	bool waiting;
 
-	spin_lock_irqsave(q->queue_lock, flags);
-	mq->complete_req = req;
-	mq->rw_wait = false;
-	waiting = mq->waiting;
-	spin_unlock_irqrestore(q->queue_lock, flags);
+	if (!mmc_queue_direct_complete(host)) {
+		bool waiting;
+
+		spin_lock_irqsave(q->queue_lock, flags);
+		mq->complete_req = req;
+		mq->rw_wait = false;
+		waiting = mq->waiting;
+		spin_unlock_irqrestore(q->queue_lock, flags);
+
+		if (waiting)
+			wake_up(&mq->wait);
+		else
+			kblockd_schedule_work(&mq->complete_work);
 
-	if (waiting)
+		return;
+	}
+
+	if (mmc_blk_rq_error(&mqrq->brq) ||
+	    mmc_blk_urgent_bkops_needed(mq, mqrq)) {
+		spin_lock_irqsave(q->queue_lock, flags);
+		mq->recovery_needed = true;
+		mq->recovery_req = req;
+		spin_unlock_irqrestore(q->queue_lock, flags);
 		wake_up(&mq->wait);
-	else
-		kblockd_schedule_work(&mq->complete_work);
+		schedule_work(&mq->recovery_work);
+		return;
+	}
+
+	mmc_blk_rw_reset_success(mq, req);
+
+	mq->rw_wait = false;
+	wake_up(&mq->wait);
+
+	mmc_blk_mq_post_req(mq, req);
 }
 
 static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
@@ -2230,7 +2299,12 @@  static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
 	bool done;
 
 	spin_lock_irqsave(q->queue_lock, flags);
-	done = !mq->rw_wait;
+	if (mq->recovery_needed) {
+		*err = -EBUSY;
+		done = true;
+	} else {
+		done = !mq->rw_wait;
+	}
 	mq->waiting = !done;
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
@@ -2277,6 +2351,10 @@  static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
 	if (err)
 		mq->rw_wait = false;
 
+	/* Release re-tuning here where there is no synchronization required */
+	if (mmc_queue_direct_complete(host))
+		mmc_retune_release(host);
+
 out_post_req:
 	if (err && host->ops->post_req)
 		host->ops->post_req(host, &mqrq->brq.mrq, err);
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 6c0e98c1af71..5ad22c1c0318 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -12,6 +12,7 @@ 
 
 enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req);
 void mmc_blk_mq_complete(struct request *req);
+void mmc_blk_mq_recovery(struct mmc_queue *mq);
 
 struct work_struct;
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 971f97698866..bcba2995c767 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -165,7 +165,10 @@  static void mmc_mq_recovery_handler(struct work_struct *work)
 
 	mq->in_recovery = true;
 
-	mmc_blk_cqe_recovery(mq);
+	if (mq->use_cqe)
+		mmc_blk_cqe_recovery(mq);
+	else
+		mmc_blk_mq_recovery(mq);
 
 	mq->in_recovery = false;
 
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index f05b5a9d2f87..9bbfbb1fad7b 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -102,6 +102,7 @@  struct mmc_queue {
 	bool			waiting;
 	struct work_struct	recovery_work;
 	wait_queue_head_t	wait;
+	struct request		*recovery_req;
 	struct request		*complete_req;
 	struct mutex		complete_lock;
 	struct work_struct	complete_work;
@@ -133,4 +134,9 @@  static inline int mmc_cqe_qcnt(struct mmc_queue *mq)
 	       mq->in_flight[MMC_ISSUE_ASYNC];
 }
 
+static inline bool mmc_queue_direct_complete(struct mmc_host *host)
+{
+	return host->caps & MMC_CAP_DIRECT_COMPLETE;
+}
+
 #endif
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index ce2075d6f429..4b68a95a8818 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -324,6 +324,7 @@  struct mmc_host {
 #define MMC_CAP_DRIVER_TYPE_A	(1 << 23)	/* Host supports Driver Type A */
 #define MMC_CAP_DRIVER_TYPE_C	(1 << 24)	/* Host supports Driver Type C */
 #define MMC_CAP_DRIVER_TYPE_D	(1 << 25)	/* Host supports Driver Type D */
+#define MMC_CAP_DIRECT_COMPLETE	(1 << 27)	/* RW reqs can be completed within mmc_request_done() */
 #define MMC_CAP_CD_WAKE		(1 << 28)	/* Enable card detect wake */
 #define MMC_CAP_CMD_DURING_TFR	(1 << 29)	/* Commands during data transfer */
 #define MMC_CAP_CMD23		(1 << 30)	/* CMD23 supported. */