From patchwork Wed Jun 15 14:06:28 2011
X-Patchwork-Submitter: Guennadi Liakhovetski
X-Patchwork-Id: 882172
Date: Wed, 15 Jun 2011 16:06:28 +0200 (CEST)
From: Guennadi Liakhovetski
To: linux-mmc@vger.kernel.org
Cc: linux-sh@vger.kernel.org, Ian Molton, Kuninori Morimoto, Chris Ball,
 Magnus Damm
Subject: [PATCH] mmc: tmio: fix recursive spinlock, don't schedule with interrupts disabled

Calling mmc_request_done() under a spinlock with interrupts disabled
leads to recursive spinlock acquisition on the request retry path and to
scheduling in atomic context. This patch fixes both problems by moving
the mmc_request_done() call to the scheduler workqueue.

Signed-off-by: Guennadi Liakhovetski
---
Morimoto-san, this should fix the race in the TMIO / SDHI driver that
you reported here:

http://article.gmane.org/gmane.linux.ports.sh.devel/11349

Please verify.

 drivers/mmc/host/tmio_mmc.h     |  2 ++
 drivers/mmc/host/tmio_mmc_pio.c | 14 +++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
index 8260bc2..b4dfc13 100644
--- a/drivers/mmc/host/tmio_mmc.h
+++ b/drivers/mmc/host/tmio_mmc.h
@@ -73,6 +73,8 @@ struct tmio_mmc_host {
 	/* Track lost interrupts */
 	struct delayed_work	delayed_reset_work;
+	struct work_struct	done;
+
 	/* protect host private data */
 	spinlock_t		lock;
 	unsigned long		last_req_ts;
 };
diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
index ad6347b..1b63045 100644
--- a/drivers/mmc/host/tmio_mmc_pio.c
+++ b/drivers/mmc/host/tmio_mmc_pio.c
@@ -297,10 +297,16 @@ static void tmio_mmc_finish_request(struct tmio_mmc_host *host)
 
 	host->mrq = NULL;
 
-	/* FIXME: mmc_request_done() can schedule! */
 	mmc_request_done(host->mmc, mrq);
 }
 
+static void tmio_mmc_done_work(struct work_struct *work)
+{
+	struct tmio_mmc_host *host = container_of(work, struct tmio_mmc_host,
+						  done);
+	tmio_mmc_finish_request(host);
+}
+
 /* These are the bitmasks the tmio chip requires to implement the MMC response
  * types.  Note that R1 and R6 are the same in this scheme. */
 #define APP_CMD        0x0040
@@ -467,7 +473,7 @@ void tmio_mmc_do_data_irq(struct tmio_mmc_host *host)
 		BUG();
 	}
 
-	tmio_mmc_finish_request(host);
+	schedule_work(&host->done);
 }
 
 static void tmio_mmc_data_irq(struct tmio_mmc_host *host)
@@ -557,7 +563,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host,
 			tasklet_schedule(&host->dma_issue);
 		}
 	} else {
-		tmio_mmc_finish_request(host);
+		schedule_work(&host->done);
 	}
 
 out:
@@ -916,6 +922,7 @@ int __devinit tmio_mmc_host_probe(struct tmio_mmc_host **host,
 	/* Init delayed work for request timeouts */
 	INIT_DELAYED_WORK(&_host->delayed_reset_work, tmio_mmc_reset_work);
+	INIT_WORK(&_host->done, tmio_mmc_done_work);
 
 	/* See if we also get DMA */
 	tmio_mmc_request_dma(_host, pdata);
@@ -963,6 +970,7 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
 	pm_runtime_get_sync(&pdev->dev);
 
 	mmc_remove_host(host->mmc);
+	cancel_work_sync(&host->done);
 	cancel_delayed_work_sync(&host->delayed_reset_work);
 	tmio_mmc_release_dma(host);