From patchwork Fri Jan 27 14:04:54 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 9541935
From: Linus Walleij
To: Ulf Hansson, linux-mmc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Srinivas Kandagatla
Cc: Russell King, Linus Walleij
Subject: [PATCH] mmc: core/mmci: restore pre/post_req behaviour
Date: Fri, 27 Jan 2017 15:04:54 +0100
Message-Id: <20170127140454.7990-1-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.9.3
List-ID: X-Mailing-List: linux-mmc@vger.kernel.org

commit 64b12a68a9f74bb32d8efd7af1ad8a2ba02fc884
"mmc: core: fix prepared requests while doing bkops"
fixes a bug in the wrong way: a bug in the MMCI device
driver is fixed by amending the MMC core.

Thinking about it: what the pre- and post-callbacks are
doing is essentially mapping and unmapping SG lists for
DMA transfers. Why would we not be able to do that just
because a BKOPS command is sent in between? Having to
unprepare/prepare the next asynchronous request for DMA
seems wrong.
Looking at the backtrace in that commit we can see what the
real problem actually is: mmci_data_irq() is calling
mmci_dma_unmap() twice, which is going to call
arm_dma_unmap_sg() twice and v7_dma_inv_range() twice on the
same sglist, and that will crash.

This happens because a request is prepared, then a BKOPS
command is sent. The IRQ completing the BKOPS command goes
through mmci_data_irq() and thinks that a DMA operation has
just been completed, because dma_inprogress() reports true.
It then proceeds to unmap the sglist.

But that was wrong! dma_inprogress() should NOT be true,
because no DMA was actually in progress: we had just prepared
the sglist, and the DMA channel dma_current had been
configured, but NOT started. Because of this, the sglist is
already unmapped when we get our actual data completion IRQ,
we unmap the sglist once more, and we get this crash.

Therefore, revert the solution that pushes the problem into
the core, and instead augment the implementation such that
dma_inprogress() only reports true if a DMA transfer has
actually been started. After this we can keep the request
prepared across the BKOPS command and need not
unprepare/reprepare it.

Fixes: 64b12a68a9f7 ("mmc: core: fix prepared requests while doing bkops")
Cc: Srinivas Kandagatla
Signed-off-by: Linus Walleij
Tested-by: Srinivas Kandagatla
---
This was found when trying to refactor the stack to complete
requests from the mmc_request_done() call: it is pretty tricky
to do when you don't have the previous and next-to-run
asynchronous requests available. Luckily I think it is just a
bug fix done the wrong way. I will build further patches on
this one.
---
 drivers/mmc/core/core.c | 9 ---------
 drivers/mmc/host/mmci.c | 7 ++++++-
 drivers/mmc/host/mmci.h | 3 ++-
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index ed1768cf464a..a160c3a7777a 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -676,16 +676,7 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 		    ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
 		     (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
 		    (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
-
-			/* Cancel the prepared request */
-			if (areq)
-				mmc_post_req(host, areq->mrq, -EINVAL);
-
 			mmc_start_bkops(host->card, true);
-
-			/* prepare the request again */
-			if (areq)
-				mmc_pre_req(host, areq->mrq);
 		}
 	}
diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index 01a804792f30..bc5fd04acd0f 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -507,6 +507,7 @@ static void mmci_dma_data_error(struct mmci_host *host)
 {
 	dev_err(mmc_dev(host->mmc), "error during DMA transfer!\n");
 	dmaengine_terminate_all(host->dma_current);
+	host->dma_in_progress = false;
 	host->dma_current = NULL;
 	host->dma_desc_current = NULL;
 	host->data->host_cookie = 0;
@@ -565,6 +566,7 @@ static void mmci_dma_finalize(struct mmci_host *host, struct mmc_data *data)
 		mmci_dma_release(host);
 	}
 
+	host->dma_in_progress = false;
 	host->dma_current = NULL;
 	host->dma_desc_current = NULL;
 }
@@ -665,6 +667,7 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
 	dev_vdbg(mmc_dev(host->mmc),
 		 "Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags %08x\n",
 		 data->sg_len, data->blksz, data->blocks, data->flags);
+	host->dma_in_progress = true;
 	dmaengine_submit(host->dma_desc_current);
 	dma_async_issue_pending(host->dma_current);
 
@@ -740,8 +743,10 @@ static void mmci_post_request(struct mmc_host *mmc, struct mmc_request *mrq,
 		if (host->dma_desc_current == next->dma_desc)
 			host->dma_desc_current = NULL;
 
-		if (host->dma_current == next->dma_chan)
+		if (host->dma_current == next->dma_chan) {
+			host->dma_in_progress = false;
 			host->dma_current = NULL;
+		}
 
 		next->dma_desc = NULL;
 		next->dma_chan = NULL;
diff --git a/drivers/mmc/host/mmci.h b/drivers/mmc/host/mmci.h
index 56322c6afba4..4a8bef1aac8f 100644
--- a/drivers/mmc/host/mmci.h
+++ b/drivers/mmc/host/mmci.h
@@ -245,8 +245,9 @@ struct mmci_host {
 	struct dma_chan		*dma_tx_channel;
 	struct dma_async_tx_descriptor	*dma_desc_current;
 	struct mmci_host_next	next_data;
+	bool			dma_in_progress;
 
-#define dma_inprogress(host)	((host)->dma_current)
+#define dma_inprogress(host)	((host)->dma_in_progress)
 #else
 #define dma_inprogress(host)	(0)
 #endif