From patchwork Tue Feb 12 07:25:17 2019
From: Christoph Hellwig
To: Ulf Hansson
Subject: [PATCH 03/14] mmc: add a need_kmap flag to struct mmc_host
Date: Tue, 12 Feb 2019 08:25:17 +0100
Message-Id: <20190212072528.13167-4-hch@lst.de>
In-Reply-To: <20190212072528.13167-1-hch@lst.de>
References: <20190212072528.13167-1-hch@lst.de>
Cc: Aaro Koskinen, Nicolas Pitre, linux-mmc@vger.kernel.org,
	Russell King, linux-kernel@vger.kernel.org,
	iommu@lists.linux-foundation.org, Ben Dooks,
	linux-omap@vger.kernel.org, linux-arm-kernel@lists.infradead.org

If we want to get rid of the block layer bounce buffering for highmem, we
need to ensure that no segment spans multiple pages so that we can kmap it.
Add a flag to struct mmc_host so that we can handle the block and DMA
layer interactions in common code.

Signed-off-by: Christoph Hellwig
---
 drivers/mmc/core/queue.c | 13 +++++++++++++
 include/linux/mmc/host.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 35cc138b096d..71cd2411329e 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -370,6 +370,19 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 	blk_queue_max_segments(mq->queue, host->max_segs);
 	blk_queue_max_segment_size(mq->queue, host->max_seg_size);
 
+	/*
+	 * If the host requires kmapping for PIO we need to ensure
+	 * that no segment spans a page boundary.
+	 */
+	if (host->need_kmap) {
+		unsigned int dma_boundary = host->max_seg_size - 1;
+
+		if (dma_boundary >= PAGE_SIZE)
+			dma_boundary = PAGE_SIZE - 1;
+		blk_queue_segment_boundary(mq->queue, dma_boundary);
+		dma_set_seg_boundary(mmc_dev(host), dma_boundary);
+	}
+
 	INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
 	INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);
 
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 4eadf01b4a93..87f8a89d2f70 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -397,6 +397,7 @@ struct mmc_host {
 	unsigned int		doing_retune:1;	/* re-tuning in progress */
 	unsigned int		retune_now:1;	/* do re-tuning at next req */
 	unsigned int		retune_paused:1; /* re-tuning is temporarily disabled */
+	unsigned int		need_kmap:1;	/* only allow single page segments */
 
 	int			rescan_disable;	/* disable card detection */
 	int			rescan_entered;	/* used with nonremovable devices */
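
For illustration only, here is a rough sketch of how a PIO-only host driver
might opt into the new flag.  The "foo" host structure, its FIFO register and
the foo_* helpers are invented for this example and are not part of the patch;
only mmc->need_kmap comes from the hunks above.  The idea is that once
need_kmap is set, mmc_setup_queue() clamps every segment to a single page, so
the driver's PIO loop can kmap each scatterlist entry without handling page
crossings:

/*
 * Sketch of a hypothetical PIO-only host driver ("foo") using the new
 * flag.  foo_host, its FIFO register and the probe glue are made up;
 * only need_kmap is introduced by this patch.
 */
#include <linux/highmem.h>
#include <linux/io.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
#include <linux/platform_device.h>
#include <linux/scatterlist.h>

struct foo_host {
	void __iomem	*fifo;		/* hypothetical data FIFO register */
};

static void foo_write_fifo(struct foo_host *host, const void *buf,
			   unsigned int len)
{
	/* Push the buffer to the (made up) FIFO one 32-bit word at a time. */
	iowrite32_rep(host->fifo, buf, len / 4);
}

static void foo_pio_write(struct foo_host *host, struct mmc_data *data)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(data->sg, sg, data->sg_len, i) {
		/*
		 * Because need_kmap is set, the core guarantees that each
		 * segment fits inside a single page, so one kmap covers it.
		 */
		void *vaddr = kmap_atomic(sg_page(sg));

		foo_write_fifo(host, vaddr + sg->offset, sg->length);
		kunmap_atomic(vaddr);
	}
}

static int foo_probe(struct platform_device *pdev)
{
	struct mmc_host *mmc;

	mmc = mmc_alloc_host(sizeof(struct foo_host), &pdev->dev);
	if (!mmc)
		return -ENOMEM;

	/* This controller does PIO through kmap, so cap segments to a page. */
	mmc->need_kmap = 1;

	return mmc_add_host(mmc);
}

Doing the clamping in mmc_setup_queue() rather than in each driver keeps the
block layer segment boundary and the DMA layer segment boundary in sync, which
is the point of handling the interaction in common code.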