From patchwork Thu Jun 9 11:52:39 2016
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 9166881
From: Adrian Hunter <adrian.hunter@intel.com>
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov,
	Jaehoon Chung, Dong Aisheng, Das Asutosh, Zhangfei Gao,
	Sujit Reddy Thumma, Dorfman Konstantin, David Griego,
	Sahitya Tummala, Harjani Ritesh
Subject: [PATCH RFC 39/46] mmc: block: Use local var for mqrq_cur
Date: Thu, 9 Jun 2016 14:52:39 +0300
Message-Id: <1465473166-22532-40-git-send-email-adrian.hunter@intel.com>
In-Reply-To: <1465473166-22532-1-git-send-email-adrian.hunter@intel.com>
References: <1465473166-22532-1-git-send-email-adrian.hunter@intel.com>

A subsequent patch will remove 'mq->mqrq_cur'. Prepare for that by
assigning it to a local variable.
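For illustration only, a minimal standalone sketch of the same pattern,
using hypothetical names (struct queue, process()) rather than the mmc
code: the member is read once into a local variable, so a later patch
that removes the member only has to replace that single assignment.

/* Hypothetical sketch, not the mmc code: cache a soon-to-be-removed
 * struct member in a local so only one line changes when it goes away.
 */
#include <stdio.h>

struct item {
	int value;
};

struct queue {
	struct item *cur;	/* member scheduled for removal */
};

static void process(struct queue *q)
{
	struct item *cur = q->cur;	/* single remaining reference */

	/* All further uses go through the local, not q->cur. */
	printf("value: %d\n", cur->value);
	cur->value++;
	printf("value: %d\n", cur->value);
}

int main(void)
{
	struct item it = { .value = 41 };
	struct queue q = { .cur = &it };

	process(&q);
	return 0;
}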
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/card/block.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index b42318ce0ad7..db28274a7624 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -1988,6 +1988,7 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 	struct mmc_blk_request *brq;
 	int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
+	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
 	struct mmc_queue_req *mq_rq;
 	struct request *req;
 	struct mmc_async_req *areq;
@@ -2010,18 +2011,18 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			    !IS_ALIGNED(blk_rq_sectors(rqc), 8)) {
 				pr_err("%s: Transfer size is not 4KB sector size aligned\n",
 					rqc->rq_disk->disk_name);
-				mq_rq = mq->mqrq_cur;
+				mq_rq = mqrq_cur;
 				req = rqc;
 				rqc = NULL;
 				goto cmd_abort;
 			}
 
 			if (reqs >= packed_nr)
-				mmc_blk_packed_hdr_wrq_prep(mq->mqrq_cur,
+				mmc_blk_packed_hdr_wrq_prep(mqrq_cur,
							    card, mq);
 			else
-				mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-			areq = &mq->mqrq_cur->mmc_active;
+				mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+			areq = &mqrq_cur->mmc_active;
 		} else
 			areq = NULL;
 		areq = mmc_start_req(card->host, areq, (int *) &status);
@@ -2163,12 +2164,12 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			/*
 			 * If current request is packed, it needs to put back.
 			 */
-			if (mmc_packed_cmd(mq->mqrq_cur->cmd_type))
-				mmc_blk_revert_packed_req(mq, mq->mqrq_cur);
+			if (mmc_packed_cmd(mqrq_cur->cmd_type))
+				mmc_blk_revert_packed_req(mq, mqrq_cur);
 
-			mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
+			mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
 			mmc_start_req(card->host,
-				      &mq->mqrq_cur->mmc_active, NULL);
+				      &mqrq_cur->mmc_active, NULL);
 		}
 	}
 