From patchwork Wed May 10 08:24:17 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 9719427
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Linus Walleij
Subject: [PATCH 4/5] mmc: block: move single ioctl() commands to block requests
Date: Wed, 10 May 2017 10:24:17 +0200
Message-Id: <20170510082418.10513-5-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

This wraps single ioctl() commands into block requests using the
custom block layer request types REQ_OP_DRV_IN and REQ_OP_DRV_OUT.

By doing this we loosen the grip on the big host lock, since two
calls to mmc_get_card()/mmc_put_card() are removed.

The ioctl() in/out argument is stored as a pointer in the per-request
struct mmc_queue_req container. Since the block layer now allocates
this container for us in blk_get_request(), we can immediately
dereference it and use it to pass the argument down to the issuing
function.

Tested on the ux500 with the userspace command:

  mmc extcsd read /dev/mmcblk3

resulting in a successful EXTCSD info dump back to the console.

Signed-off-by: Linus Walleij
Tested-by: Avri Altman
---
 drivers/mmc/core/block.c | 56 ++++++++++++++++++++++++++++++++++++++----------
 drivers/mmc/core/queue.h |  3 +++
 2 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 323f3790b629..640db4f57a31 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -564,8 +564,10 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 {
 	struct mmc_blk_ioc_data *idata;
 	struct mmc_blk_data *md;
+	struct mmc_queue *mq;
 	struct mmc_card *card;
 	int err = 0, ioc_err = 0;
+	struct request *req;
 
 	/*
 	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
@@ -591,17 +593,18 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 		goto cmd_done;
 	}
 
-	mmc_get_card(card);
-
-	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
-
-	/* Always switch back to main area after RPMB access */
-	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
-		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
-
-	mmc_put_card(card);
-
+	/*
+	 * Dispatch the ioctl() into the block request queue.
+	 */
+	mq = &md->queue;
+	req = blk_get_request(mq->queue,
+		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
+		__GFP_RECLAIM);
+	req_to_mq_rq(req)->idata = idata;
+	blk_execute_rq(mq->queue, NULL, req, 0);
+	ioc_err = req_to_mq_rq(req)->ioc_result;
 	err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
+	blk_put_request(req);
 
 cmd_done:
 	mmc_blk_put(md);
@@ -611,6 +614,31 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 	return ioc_err ? ioc_err : err;
 }
 
+/*
+ * The ioctl commands come back from the block layer after they have been
+ * queued and processed together with all other requests, and are then
+ * issued in this function.
+ */
+static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mq_rq;
+	struct mmc_blk_ioc_data *idata;
+	struct mmc_card *card = mq->card;
+	struct mmc_blk_data *md = mq->blkdata;
+	int ioc_err;
+
+	mq_rq = req_to_mq_rq(req);
+	idata = mq_rq->idata;
+	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
+	mq_rq->ioc_result = ioc_err;
+
+	/* Always switch back to main area after RPMB access */
+	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
+		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
+
+	blk_end_request_all(req, ioc_err);
+}
+
 static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
 				   struct mmc_ioc_multi_cmd __user *user)
 {
@@ -1854,7 +1882,13 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		goto out;
 	}
 
-	if (req && req_op(req) == REQ_OP_DISCARD) {
+	if (req &&
+	    (req_op(req) == REQ_OP_DRV_IN || req_op(req) == REQ_OP_DRV_OUT)) {
+		/* complete ongoing async transfer before issuing ioctl()s */
+		if (mq->qcnt)
+			mmc_blk_issue_rw_rq(mq, NULL);
+		mmc_blk_ioctl_cmd_issue(mq, req);
+	} else if (req && req_op(req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
 		if (mq->qcnt)
 			mmc_blk_issue_rw_rq(mq, NULL);
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 8aa10ffdf622..aeb3408dc85e 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -22,6 +22,7 @@ static inline bool mmc_req_is_special(struct request *req)
 
 struct task_struct;
 struct mmc_blk_data;
+struct mmc_blk_ioc_data;
 
 struct mmc_blk_request {
 	struct mmc_request	mrq;
@@ -40,6 +41,8 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
+	int			ioc_result;
+	struct mmc_blk_ioc_data	*idata;
 };
 
 struct mmc_queue {
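
For reference, here is a minimal standalone sketch of the dispatch side that
this patch open-codes in mmc_blk_ioctl_cmd(). It is illustrative only and not
part of the patch: it assumes the same pre-blk-mq block API used above
(blk_get_request()/blk_execute_rq()/blk_put_request()), the req_to_mq_rq()
helper from queue.h and the new idata/ioc_result fields; the helper name
my_blk_ioctl_dispatch() is made up for the example.

/*
 * Illustrative sketch (not part of the patch): dispatch one driver-private
 * ioctl() command through the block layer and wait for the issuing side
 * (mmc_blk_ioctl_cmd_issue()) to run it and fill in the result.
 */
static int my_blk_ioctl_dispatch(struct mmc_blk_data *md,
				 struct mmc_blk_ioc_data *idata)
{
	struct mmc_queue *mq = &md->queue;
	struct request *req;
	int ioc_err;

	/* REQ_OP_DRV_OUT for a write command, REQ_OP_DRV_IN for a read */
	req = blk_get_request(mq->queue,
			      idata->ic.write_flag ?
			      REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
			      __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);

	/* Hand the ioctl() payload to the issuing side ... */
	req_to_mq_rq(req)->idata = idata;

	/* ... and block until the request has been issued and completed */
	blk_execute_rq(mq->queue, NULL, req, 0);
	ioc_err = req_to_mq_rq(req)->ioc_result;
	blk_put_request(req);

	return ioc_err;
}

The point of the scheme is that blk_execute_rq() is synchronous: by the time
it returns, mmc_blk_ioctl_cmd_issue() has already run the command on the
issuing side, stored the result in ioc_result and completed the request, which
is what lets the explicit mmc_get_card()/mmc_put_card() pair go away.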