[7/7] mmc: stop using block layer bounce buffers

Message ID 20180518171847.16419-8-hch@lst.de
State New

Commit Message

Christoph Hellwig May 18, 2018, 5:18 p.m. UTC
If a driver uses the DMA API (as indicated by a device with a DMA mask),
we can rely on the DMA mapping API to do any required bounce buffering,
and all drivers using bounce buffering or PIO now either use the proper
highmem-aware accessors or depend on !HIGHMEM.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/mmc/core/queue.c | 5 -----
 1 file changed, 5 deletions(-)

Comments

Ulf Hansson May 21, 2018, 10:55 a.m. UTC | #1
On 18 May 2018 at 19:18, Christoph Hellwig <hch@lst.de> wrote:
> If a driver uses the dma API (as indicated by a device with a dma mask)
> we can rely on the dma mapping API to do any required bounce buffering,
> and all drivers using bounce buffering or pio now either use the proper
> highmem-aware accessors or depend on !HIGHMEM.

Considering that we have a few other mmc host drivers to convert to
the highmem accessors, I need to postpone this one until all have been
fixed, right?

Well, unless the rest use the "depends on !HIGHMEM" option!?

Kind regards
Uffe

>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/mmc/core/queue.c | 5 -----
>  1 file changed, 5 deletions(-)
>
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 56e9a803db21..a18541930c01 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -351,17 +351,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
>  static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
>  {
>         struct mmc_host *host = card->host;
> -       u64 limit = BLK_BOUNCE_HIGH;
> -
> -       if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
> -               limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
>
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
>         blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
>         if (mmc_can_erase(card))
>                 mmc_queue_setup_discard(mq->queue, card);
>
> -       blk_queue_bounce_limit(mq->queue, limit);
>         blk_queue_max_hw_sectors(mq->queue,
>                 min(host->max_blk_count, host->max_req_size / 512));
>         blk_queue_max_segments(mq->queue, host->max_segs);
> --
> 2.17.0
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
Christoph Hellwig May 21, 2018, 3:38 p.m. UTC | #2
On Mon, May 21, 2018 at 12:55:10PM +0200, Ulf Hansson wrote:
> On 18 May 2018 at 19:18, Christoph Hellwig <hch@lst.de> wrote:
> > If a driver uses the dma API (as indicated by a device with a dma mask)
> > we can rely on the dma mapping API to do any required bounce buffering,
> > and all drivers using bounce buffering or pio now either use the proper
> > highmem-aware accessors or depend on !HIGHMEM.
> 
> Considering that we have a few other mmc host drivers to convert to
> the highmem accessors, I need to postpone this one until all have been
> fixed, right?

Yes.

> Well, unless the rest use the "depends on !HIGHMEM" option!?

I had patches for that in my tree, but dropped them for now.  I should
have dropped this patch as well with them.

Patch

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 56e9a803db21..a18541930c01 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -351,17 +351,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
 	struct mmc_host *host = card->host;
-	u64 limit = BLK_BOUNCE_HIGH;
-
-	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
-	blk_queue_bounce_limit(mq->queue, limit);
 	blk_queue_max_hw_sectors(mq->queue,
 		min(host->max_blk_count, host->max_req_size / 512));
 	blk_queue_max_segments(mq->queue, host->max_segs);