From patchwork Thu Apr 13 22:05:34 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Logan Gunthorpe
X-Patchwork-Id: 9680463
From: Logan Gunthorpe
To: Christoph Hellwig, "Martin K. Petersen", Sagi Grimberg, Jens Axboe,
    Tejun Heo, Greg Kroah-Hartman, Dan Williams, Ross Zwisler,
    Matthew Wilcox, Sumit Semwal, Ming Lin,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, intel-gfx@lists.freedesktop.org,
    linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-nvdimm@lists.01.org,
    linux-scsi@vger.kernel.org, fcoe-devel@open-fcoe.org,
    open-iscsi@googlegroups.com, megaraidlinux.pdl@broadcom.com,
    sparmaintainer@unisys.com, devel@driverdev.osuosl.org,
    target-devel@vger.kernel.org, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, rds-devel@oss.oracle.com
Cc: Steve Wise, Logan Gunthorpe, Stephen Bates
Date: Thu, 13 Apr 2017 16:05:34 -0600
Message-Id: <1492121135-4437-22-git-send-email-logang@deltatee.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1492121135-4437-1-git-send-email-logang@deltatee.com>
References: <1492121135-4437-1-git-send-email-logang@deltatee.com>
Subject: [PATCH 21/22] mmc: tifm_sd: Make use of the new sg_map helper function

This conversion is a bit complicated. We modify the read_fifo, write_fifo
and copy_page functions to take a scatterlist instead of a page, so that
we can use sg_map instead of kmap_atomic. A bit of accounting needed to be
done for the offset to make this work: sg_map applies the sg->offset
itself, but the offset has already been added and used earlier in this
code, so it has to be subtracted back out before calling the helper. There
is also no error path out of these functions, so if unmappable memory
finds its way into the sgl, all we can do is WARN.
Signed-off-by: Logan Gunthorpe
---
 drivers/mmc/host/tifm_sd.c | 88 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 67 insertions(+), 21 deletions(-)

diff --git a/drivers/mmc/host/tifm_sd.c b/drivers/mmc/host/tifm_sd.c
index 93c4b40..75b0d74 100644
--- a/drivers/mmc/host/tifm_sd.c
+++ b/drivers/mmc/host/tifm_sd.c
@@ -111,14 +111,26 @@ struct tifm_sd {
 };
 
 /* for some reason, host won't respond correctly to readw/writew */
-static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_read_fifo(struct tifm_sd *host, struct scatterlist *sg,
 			      unsigned int off, unsigned int cnt)
 {
 	struct tifm_dev *sock = host->dev;
 	unsigned char *buf;
 	unsigned int pos = 0, val;
 
-	buf = kmap_atomic(pg) + off;
+	buf = sg_map_offset(sg, off - sg->offset, SG_KMAP_ATOMIC);
+	if (IS_ERR(buf)) {
+		/*
+		 * This should really never happen unless
+		 * the code is changed to use memory that is
+		 * not mappable in the sg. Seeing there doesn't
+		 * seem to be any error path out of here,
+		 * we can only WARN.
+		 */
+		WARN(1, "Non-mappable memory used in sg!");
+		return;
+	}
+
 	if (host->cmd_flags & DATA_CARRY) {
 		buf[pos++] = host->bounce_buf_data[0];
 		host->cmd_flags &= ~DATA_CARRY;
@@ -134,17 +146,29 @@ static void tifm_sd_read_fifo(struct tifm_sd *host, struct page *pg,
 		}
 		buf[pos++] = (val >> 8) & 0xff;
 	}
-	kunmap_atomic(buf - off);
+	sg_unmap_offset(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
 }
 
-static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
+static void tifm_sd_write_fifo(struct tifm_sd *host, struct scatterlist *sg,
 			       unsigned int off, unsigned int cnt)
 {
 	struct tifm_dev *sock = host->dev;
 	unsigned char *buf;
 	unsigned int pos = 0, val;
 
-	buf = kmap_atomic(pg) + off;
+	buf = sg_map_offset(sg, off - sg->offset, SG_KMAP_ATOMIC);
+	if (IS_ERR(buf)) {
+		/*
+		 * This should really never happen unless
+		 * the code is changed to use memory that is
+		 * not mappable in the sg. Seeing there doesn't
+		 * seem to be any error path out of here,
+		 * we can only WARN.
+		 */
+		WARN(1, "Non-mappable memory used in sg!");
+		return;
+	}
+
 	if (host->cmd_flags & DATA_CARRY) {
 		val = host->bounce_buf_data[0] | ((buf[pos++] << 8) & 0xff00);
 		writel(val, sock->addr + SOCK_MMCSD_DATA);
@@ -161,7 +185,7 @@ static void tifm_sd_write_fifo(struct tifm_sd *host, struct page *pg,
 		val |= (buf[pos++] << 8) & 0xff00;
 		writel(val, sock->addr + SOCK_MMCSD_DATA);
 	}
-	kunmap_atomic(buf - off);
+	sg_unmap_offset(sg, buf, off - sg->offset, SG_KMAP_ATOMIC);
 }
 
 static void tifm_sd_transfer_data(struct tifm_sd *host)
@@ -170,7 +194,6 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
 	struct scatterlist *sg = r_data->sg;
 	unsigned int off, cnt, t_size = TIFM_MMCSD_FIFO_SIZE * 2;
 	unsigned int p_off, p_cnt;
-	struct page *pg;
 
 	if (host->sg_pos == host->sg_len)
 		return;
@@ -192,33 +215,57 @@ static void tifm_sd_transfer_data(struct tifm_sd *host)
 		}
 
 		off = sg[host->sg_pos].offset + host->block_pos;
 
-		pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
 		p_off = offset_in_page(off);
 		p_cnt = PAGE_SIZE - p_off;
 		p_cnt = min(p_cnt, cnt);
 		p_cnt = min(p_cnt, t_size);
 
 		if (r_data->flags & MMC_DATA_READ)
-			tifm_sd_read_fifo(host, pg, p_off, p_cnt);
+			tifm_sd_read_fifo(host, &sg[host->sg_pos], p_off,
+					  p_cnt);
 		else if (r_data->flags & MMC_DATA_WRITE)
-			tifm_sd_write_fifo(host, pg, p_off, p_cnt);
+			tifm_sd_write_fifo(host, &sg[host->sg_pos], p_off,
+					   p_cnt);
 
 		t_size -= p_cnt;
 		host->block_pos += p_cnt;
 	}
 }
 
-static void tifm_sd_copy_page(struct page *dst, unsigned int dst_off,
-			      struct page *src, unsigned int src_off,
+static void tifm_sd_copy_page(struct scatterlist *dst, unsigned int dst_off,
+			      struct scatterlist *src, unsigned int src_off,
 			      unsigned int count)
 {
-	unsigned char *src_buf = kmap_atomic(src) + src_off;
-	unsigned char *dst_buf = kmap_atomic(dst) + dst_off;
+	unsigned char *src_buf, *dst_buf;
+
+	src_off -= src->offset;
+	dst_off -= dst->offset;
+
+	src_buf = sg_map_offset(src, src_off, SG_KMAP_ATOMIC);
+	if (IS_ERR(src_buf))
+		goto sg_map_err;
+
+	dst_buf = sg_map_offset(dst, dst_off, SG_KMAP_ATOMIC);
+	if (IS_ERR(dst_buf))
+		goto sg_map_err;
 
 	memcpy(dst_buf, src_buf, count);
-	kunmap_atomic(dst_buf - dst_off);
-	kunmap_atomic(src_buf - src_off);
+	sg_unmap_offset(dst, dst_buf, dst_off, SG_KMAP_ATOMIC);
+	sg_unmap_offset(src, src_buf, src_off, SG_KMAP_ATOMIC);
+
+sg_map_err:
+	if (!IS_ERR(src_buf))
+		sg_unmap_offset(src, src_buf, src_off, SG_KMAP_ATOMIC);
+
+	/*
+	 * This should really never happen unless
+	 * the code is changed to use memory that is
+	 * not mappable in the sg. Seeing there doesn't
+	 * seem to be any error path out of here,
+	 * we can only WARN.
+	 */
+	WARN(1, "Non-mappable memory used in sg!");
 }
 
 static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
@@ -227,7 +274,6 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
 	unsigned int t_size = r_data->blksz;
 	unsigned int off, cnt;
 	unsigned int p_off, p_cnt;
-	struct page *pg;
 
 	dev_dbg(&host->dev->dev, "bouncing block\n");
 	while (t_size) {
@@ -241,18 +287,18 @@ static void tifm_sd_bounce_block(struct tifm_sd *host, struct mmc_data *r_data)
 		}
 
 		off = sg[host->sg_pos].offset + host->block_pos;
 
-		pg = nth_page(sg_page(&sg[host->sg_pos]), off >> PAGE_SHIFT);
 		p_off = offset_in_page(off);
 		p_cnt = PAGE_SIZE - p_off;
 		p_cnt = min(p_cnt, cnt);
 		p_cnt = min(p_cnt, t_size);
 
 		if (r_data->flags & MMC_DATA_WRITE)
-			tifm_sd_copy_page(sg_page(&host->bounce_buf),
+			tifm_sd_copy_page(&host->bounce_buf,
 					  r_data->blksz - t_size,
-					  pg, p_off, p_cnt);
+					  &sg[host->sg_pos], p_off, p_cnt);
 		else if (r_data->flags & MMC_DATA_READ)
-			tifm_sd_copy_page(pg, p_off, sg_page(&host->bounce_buf),
+			tifm_sd_copy_page(&sg[host->sg_pos], p_off,
+					  &host->bounce_buf,
 					  r_data->blksz - t_size, p_cnt);
 
 		t_size -= p_cnt;