From patchwork Tue Nov 29 10:09:14 2016
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 9451383
From: Adrian Hunter
To: Ulf Hansson
Cc: linux-mmc, Alex Lemberg, Mateusz Nowak, Yuliy Izrailov, Jaehoon Chung,
    Dong Aisheng, Das Asutosh, Zhangfei Gao, Dorfman Konstantin, David Griego,
    Sahitya Tummala, Harjani Ritesh, Venu Byravarasu, Linus Walleij
Subject: [PATCH V8 07/20] mmc: queue: Factor out mmc_queue_reqs_free_bufs()
Date: Tue, 29 Nov 2016 12:09:14 +0200
Message-Id: <1480414167-15577-8-git-send-email-adrian.hunter@intel.com>
In-Reply-To: <1480414167-15577-1-git-send-email-adrian.hunter@intel.com>
References: <1480414167-15577-1-git-send-email-adrian.hunter@intel.com>
X-Mailing-List: linux-mmc@vger.kernel.org

In preparation for supporting a queue of requests, factor out
mmc_queue_reqs_free_bufs().
Signed-off-by: Adrian Hunter
Reviewed-by: Linus Walleij
Reviewed-by: Harjani Ritesh
---
 drivers/mmc/card/queue.c | 65 +++++++++++++++++++-----------------------------
 1 file changed, 26 insertions(+), 39 deletions(-)

diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
index 0a8480f5ea4b..7234cc48097e 100644
--- a/drivers/mmc/card/queue.c
+++ b/drivers/mmc/card/queue.c
@@ -250,6 +250,27 @@ static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
 	return ret;
 }
 
+static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
+{
+	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
+	struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
+
+	kfree(mqrq_cur->bounce_sg);
+	mqrq_cur->bounce_sg = NULL;
+	kfree(mqrq_prev->bounce_sg);
+	mqrq_prev->bounce_sg = NULL;
+
+	kfree(mqrq_cur->sg);
+	mqrq_cur->sg = NULL;
+	kfree(mqrq_cur->bounce_buf);
+	mqrq_cur->bounce_buf = NULL;
+
+	kfree(mqrq_prev->sg);
+	mqrq_prev->sg = NULL;
+	kfree(mqrq_prev->bounce_buf);
+	mqrq_prev->bounce_buf = NULL;
+}
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
@@ -266,8 +287,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	u64 limit = BLK_BOUNCE_HIGH;
 	bool bounce = false;
 	int ret;
-	struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
-	struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
@@ -277,8 +296,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	if (!mq->queue)
 		return -ENOMEM;
 
-	mq->mqrq_cur = mqrq_cur;
-	mq->mqrq_prev = mqrq_prev;
+	mq->mqrq_cur = &mq->mqrq[0];
+	mq->mqrq_prev = &mq->mqrq[1];
 	mq->queue->queuedata = mq;
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
@@ -334,27 +353,13 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 
 	if (IS_ERR(mq->thread)) {
 		ret = PTR_ERR(mq->thread);
-		goto free_bounce_sg;
+		goto cleanup_queue;
 	}
 
 	return 0;
- free_bounce_sg:
-	kfree(mqrq_cur->bounce_sg);
-	mqrq_cur->bounce_sg = NULL;
-	kfree(mqrq_prev->bounce_sg);
-	mqrq_prev->bounce_sg = NULL;
 cleanup_queue:
-	kfree(mqrq_cur->sg);
-	mqrq_cur->sg = NULL;
-	kfree(mqrq_cur->bounce_buf);
-	mqrq_cur->bounce_buf = NULL;
-
-	kfree(mqrq_prev->sg);
-	mqrq_prev->sg = NULL;
-	kfree(mqrq_prev->bounce_buf);
-	mqrq_prev->bounce_buf = NULL;
-
+	mmc_queue_reqs_free_bufs(mq);
 	blk_cleanup_queue(mq->queue);
 	return ret;
 }
@@ -363,8 +368,6 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
 	unsigned long flags;
-	struct mmc_queue_req *mqrq_cur = mq->mqrq_cur;
-	struct mmc_queue_req *mqrq_prev = mq->mqrq_prev;
 
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
@@ -378,23 +381,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
 	blk_start_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
-	kfree(mqrq_cur->bounce_sg);
-	mqrq_cur->bounce_sg = NULL;
-
-	kfree(mqrq_cur->sg);
-	mqrq_cur->sg = NULL;
-
-	kfree(mqrq_cur->bounce_buf);
-	mqrq_cur->bounce_buf = NULL;
-
-	kfree(mqrq_prev->bounce_sg);
-	mqrq_prev->bounce_sg = NULL;
-
-	kfree(mqrq_prev->sg);
-	mqrq_prev->sg = NULL;
-
-	kfree(mqrq_prev->bounce_buf);
-	mqrq_prev->bounce_buf = NULL;
+	mmc_queue_reqs_free_bufs(mq);
 
 	mq->card = NULL;
 }
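
For context, and not part of the patch itself: the stated motivation is a later move
to a queue of requests, and once struct mmc_queue holds more than the mqrq_cur/mqrq_prev
pair, the factored-out helper is the one place that would grow a loop over all entries.
A minimal sketch of that direction, assuming a hypothetical mq->qdepth count and mqrq[]
array and a per-request helper (names not taken from this series):

static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
{
	/* Free the per-request buffers the patch currently frees pairwise */
	kfree(mqrq->bounce_sg);
	mqrq->bounce_sg = NULL;

	kfree(mqrq->sg);
	mqrq->sg = NULL;

	kfree(mqrq->bounce_buf);
	mqrq->bounce_buf = NULL;
}

static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
{
	int i;

	/* Hypothetical qdepth: number of entries in mq->mqrq[] */
	for (i = 0; i < mq->qdepth; i++)
		mmc_queue_req_free_bufs(&mq->mqrq[i]);
}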