diff mbox series

mmc: sdio: Use mmc_pre_req() / mmc_post_req()

Message ID 20200903082007.18715-1-adrian.hunter@intel.com (mailing list archive)
State New, archived
Series mmc: sdio: Use mmc_pre_req() / mmc_post_req()

Commit Message

Adrian Hunter Sept. 3, 2020, 8:20 a.m. UTC
SDHCI changed from using a tasklet to finish requests, to using an IRQ
thread i.e. commit c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet").
Because this increased the latency to complete requests, a preparatory
change was made to complete the request from the IRQ handler if
possible i.e. commit 19d2f695f4e827 ("mmc: sdhci: Call mmc_request_done()
from IRQ handler if possible").  That alleviated the situation for MMC
block devices because the MMC block driver makes use of mmc_pre_req()
and mmc_post_req() so that successful requests are completed in the IRQ
handler and any DMA unmapping is handled separately in mmc_post_req().
However SDIO was still affected, and an example has been reported with
up to 20% degradation in performance.

Looking at SDIO I/O helper functions, sdio_io_rw_ext_helper() appeared
to be a possible candidate for making use of asynchronous requests
within its I/O loops, but analysis revealed that these loops almost
never iterate more than once, so the complexity of the change would not
be warranted.

Instead, mmc_pre_req() and mmc_post_req() are added before and after I/O
submission (mmc_wait_for_req) in mmc_io_rw_extended().  This still has
the potential benefit of reducing the duration of interrupt handlers, as
well as addressing the latency issue for SDHCI.  It also seems a more
reasonable solution than forcing drivers to do everything in the IRQ
handler.

Reported-by: Dmitry Osipenko <digetx@gmail.com>
Fixes: c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
---
 drivers/mmc/core/sdio_ops.c | 39 +++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

Comments

Ulf Hansson Sept. 3, 2020, 8:34 a.m. UTC | #1
On Thu, 3 Sep 2020 at 10:20, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
> SDHCI changed from using a tasklet to finish requests, to using an IRQ
> thread i.e. commit c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet").
> Because this increased the latency to complete requests, a preparatory
> change was made to complete the request from the IRQ handler if
> possible i.e. commit 19d2f695f4e827 ("mmc: sdhci: Call mmc_request_done()
> from IRQ handler if possible").  That alleviated the situation for MMC
> block devices because the MMC block driver makes use of mmc_pre_req()
> and mmc_post_req() so that successful requests are completed in the IRQ
> handler and any DMA unmapping is handled separately in mmc_post_req().
> However SDIO was still affected, and an example has been reported with
> up to 20% degradation in performance.
>
> Looking at SDIO I/O helper functions, sdio_io_rw_ext_helper() appeared
> to be a possible candidate for making use of asynchronous requests
> within its I/O loops, but analysis revealed that these loops almost
> never iterate more than once, so the complexity of the change would not
> be warranted.
>
> Instead, mmc_pre_req() and mmc_post_req() are added before and after I/O
> submission (mmc_wait_for_req) in mmc_io_rw_extended().  This still has
> the potential benefit of reducing the duration of interrupt handlers, as
> well as addressing the latency issue for SDHCI.  It also seems a more
> reasonable solution than forcing drivers to do everything in the IRQ
> handler.

Brilliant!

So, this should mean that other host drivers that use threaded IRQ
handlers could benefit as well. It would certainly be interesting to
hear from other tests about this.

>
> Reported-by: Dmitry Osipenko <digetx@gmail.com>
> Fixes: c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet")
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> Tested-by: Dmitry Osipenko <digetx@gmail.com>

Applied for fixes and by adding a stable tag, thanks!

Kind regards
Uffe


> ---
>  drivers/mmc/core/sdio_ops.c | 39 +++++++++++++++++++++----------------
>  1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/mmc/core/sdio_ops.c b/drivers/mmc/core/sdio_ops.c
> index 93d346c01110..4c229dd2b6e5 100644
> --- a/drivers/mmc/core/sdio_ops.c
> +++ b/drivers/mmc/core/sdio_ops.c
> @@ -121,6 +121,7 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
>         struct sg_table sgtable;
>         unsigned int nents, left_size, i;
>         unsigned int seg_size = card->host->max_seg_size;
> +       int err;
>
>         WARN_ON(blksz == 0);
>
> @@ -170,28 +171,32 @@ int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
>
>         mmc_set_data_timeout(&data, card);
>
> -       mmc_wait_for_req(card->host, &mrq);
> +       mmc_pre_req(card->host, &mrq);
>
> -       if (nents > 1)
> -               sg_free_table(&sgtable);
> +       mmc_wait_for_req(card->host, &mrq);
>
>         if (cmd.error)
> -               return cmd.error;
> -       if (data.error)
> -               return data.error;
> -
> -       if (mmc_host_is_spi(card->host)) {
> +               err = cmd.error;
> +       else if (data.error)
> +               err = data.error;
> +       else if (mmc_host_is_spi(card->host))
>                 /* host driver already reported errors */
> -       } else {
> -               if (cmd.resp[0] & R5_ERROR)
> -                       return -EIO;
> -               if (cmd.resp[0] & R5_FUNCTION_NUMBER)
> -                       return -EINVAL;
> -               if (cmd.resp[0] & R5_OUT_OF_RANGE)
> -                       return -ERANGE;
> -       }
> +               err = 0;
> +       else if (cmd.resp[0] & R5_ERROR)
> +               err = -EIO;
> +       else if (cmd.resp[0] & R5_FUNCTION_NUMBER)
> +               err = -EINVAL;
> +       else if (cmd.resp[0] & R5_OUT_OF_RANGE)
> +               err = -ERANGE;
> +       else
> +               err = 0;
>
> -       return 0;
> +       mmc_post_req(card->host, &mrq, err);
> +
> +       if (nents > 1)
> +               sg_free_table(&sgtable);
> +
> +       return err;
>  }
>
>  int sdio_reset(struct mmc_host *host)
> --
> 2.17.1
>

Patch

diff --git a/drivers/mmc/core/sdio_ops.c b/drivers/mmc/core/sdio_ops.c
index 93d346c01110..4c229dd2b6e5 100644
--- a/drivers/mmc/core/sdio_ops.c
+++ b/drivers/mmc/core/sdio_ops.c
@@ -121,6 +121,7 @@  int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
 	struct sg_table sgtable;
 	unsigned int nents, left_size, i;
 	unsigned int seg_size = card->host->max_seg_size;
+	int err;
 
 	WARN_ON(blksz == 0);
 
@@ -170,28 +171,32 @@  int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn,
 
 	mmc_set_data_timeout(&data, card);
 
-	mmc_wait_for_req(card->host, &mrq);
+	mmc_pre_req(card->host, &mrq);
 
-	if (nents > 1)
-		sg_free_table(&sgtable);
+	mmc_wait_for_req(card->host, &mrq);
 
 	if (cmd.error)
-		return cmd.error;
-	if (data.error)
-		return data.error;
-
-	if (mmc_host_is_spi(card->host)) {
+		err = cmd.error;
+	else if (data.error)
+		err = data.error;
+	else if (mmc_host_is_spi(card->host))
 		/* host driver already reported errors */
-	} else {
-		if (cmd.resp[0] & R5_ERROR)
-			return -EIO;
-		if (cmd.resp[0] & R5_FUNCTION_NUMBER)
-			return -EINVAL;
-		if (cmd.resp[0] & R5_OUT_OF_RANGE)
-			return -ERANGE;
-	}
+		err = 0;
+	else if (cmd.resp[0] & R5_ERROR)
+		err = -EIO;
+	else if (cmd.resp[0] & R5_FUNCTION_NUMBER)
+		err = -EINVAL;
+	else if (cmd.resp[0] & R5_OUT_OF_RANGE)
+		err = -ERANGE;
+	else
+		err = 0;
 
-	return 0;
+	mmc_post_req(card->host, &mrq, err);
+
+	if (nents > 1)
+		sg_free_table(&sgtable);
+
+	return err;
 }
 
 int sdio_reset(struct mmc_host *host)