From patchwork Mon Oct 18 13:55:58 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 12566563
From: Sebastian Andrzej Siewior
To: linux-block@vger.kernel.org, linux-mmc@vger.kernel.org
Cc: tglx@linutronix.de, Ulf Hansson, Jens Axboe, Christoph Hellwig,
 Sebastian Andrzej Siewior
Subject: [PATCH v2 1/2] blk-mq: Add blk_mq_complete_request_direct()
Date: Mon, 18 Oct 2021 15:55:58 +0200
Message-Id: <20211018135559.244400-2-bigeasy@linutronix.de>
In-Reply-To: <20211018135559.244400-1-bigeasy@linutronix.de>
References: <20211018135559.244400-1-bigeasy@linutronix.de>

Add blk_mq_complete_request_direct(), which completes the block request
directly instead of deferring it to softirq, for single queue devices.

This is useful for devices which complete the requests in preemptible
context, where raising a softirq from preemptible context means
scheduling ksoftirqd.

Signed-off-by: Sebastian Andrzej Siewior
---
 include/linux/blk-mq.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 13ba1861e688f..93780c890b479 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -521,6 +521,17 @@ static inline void blk_mq_set_request_complete(struct request *rq)
 	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
 }
 
+/*
+ * Complete the request directly instead of deferring it to softirq or
+ * completing it on another CPU. Useful in preemptible rather than interrupt context.
+ */
+static inline void blk_mq_complete_request_direct(struct request *rq,
+		void (*complete)(struct request *rq))
+{
+	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+	complete(rq);
+}
+
 void blk_mq_start_request(struct request *rq);
 void blk_mq_end_request(struct request *rq, blk_status_t error);
 void __blk_mq_end_request(struct request *rq, blk_status_t error);
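[Editor's note: for illustration, here is a minimal sketch of the kind of
caller this helper targets -- a hypothetical single-queue driver whose
requests finish in a workqueue, i.e. in preemptible context, where
blk_mq_complete_request() could only raise BLOCK_SOFTIRQ and thereby wake
ksoftirqd. This is not part of the patch; all mydrv_* names are made up.]

#include <linux/blk-mq.h>
#include <linux/workqueue.h>

struct mydrv_cmd {
	struct work_struct work;	/* queued by the IRQ handler */
};

static void mydrv_complete_rq(struct request *rq)
{
	/* Per-driver completion: end the request with its status. */
	blk_mq_end_request(rq, BLK_STS_OK);
}

static void mydrv_done_work(struct work_struct *work)
{
	struct mydrv_cmd *cmd = container_of(work, struct mydrv_cmd, work);
	struct request *rq = blk_mq_rq_from_pdu(cmd);

	/*
	 * Worker context is preemptible: complete the request in place
	 * instead of deferring it to softirq, which here would only
	 * mean an extra trip through ksoftirqd.
	 */
	blk_mq_complete_request_direct(rq, mydrv_complete_rq);
}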
From patchwork Mon Oct 18 13:55:59 2021
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 12566593
From: Sebastian Andrzej Siewior
To: linux-block@vger.kernel.org, linux-mmc@vger.kernel.org
Cc: tglx@linutronix.de, Ulf Hansson, Jens Axboe, Christoph Hellwig,
 Sebastian Andrzej Siewior
Subject: [PATCH v2 2/2] mmc: core: Use blk_mq_complete_request_direct().
Date: Mon, 18 Oct 2021 15:55:59 +0200
Message-Id: <20211018135559.244400-3-bigeasy@linutronix.de>
In-Reply-To: <20211018135559.244400-1-bigeasy@linutronix.de>
References: <20211018135559.244400-1-bigeasy@linutronix.de>

The completion callback for the sdhci-pci device is invoked from a
kworker. I couldn't identify in which context mmc_blk_mq_req_done() is
invoked, but the remaining callers are invoked from preemptible context.
Here it would make sense to complete the request directly instead of
scheduling ksoftirqd for its completion.

Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Adrian Hunter
Reviewed-by: Ulf Hansson
---
 drivers/mmc/core/block.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 431af5e8be2f8..7d6b43fe52e8a 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2051,7 +2051,8 @@ static void mmc_blk_mq_dec_in_flight(struct mmc_queue *mq, struct request *req)
 	mmc_put_card(mq->card, &mq->ctx);
 }
 
-static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req,
+				bool can_sleep)
 {
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
 	struct mmc_request *mrq = &mqrq->brq.mrq;
@@ -2063,10 +2064,14 @@ static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req)
 	 * Block layer timeouts race with completions which means the normal
 	 * completion path cannot be used during recovery.
 	 */
-	if (mq->in_recovery)
+	if (mq->in_recovery) {
 		mmc_blk_mq_complete_rq(mq, req);
-	else if (likely(!blk_should_fake_timeout(req->q)))
-		blk_mq_complete_request(req);
+	} else if (likely(!blk_should_fake_timeout(req->q))) {
+		if (can_sleep)
+			blk_mq_complete_request_direct(req, mmc_blk_mq_complete);
+		else
+			blk_mq_complete_request(req);
+	}
 
 	mmc_blk_mq_dec_in_flight(mq, req);
 }
@@ -2087,7 +2092,7 @@ void mmc_blk_mq_recovery(struct mmc_queue *mq)
 
 	mmc_blk_urgent_bkops(mq, mqrq);
 
-	mmc_blk_mq_post_req(mq, req);
+	mmc_blk_mq_post_req(mq, req, true);
 }
 
 static void mmc_blk_mq_complete_prev_req(struct mmc_queue *mq,
@@ -2106,7 +2111,7 @@ static void mmc_blk_mq_complete_prev_req(struct mmc_queue *mq,
 	if (prev_req)
 		*prev_req = mq->complete_req;
 	else
-		mmc_blk_mq_post_req(mq, mq->complete_req);
+		mmc_blk_mq_post_req(mq, mq->complete_req, true);
 
 	mq->complete_req = NULL;
 
@@ -2178,7 +2183,8 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq)
 	mq->rw_wait = false;
 	wake_up(&mq->wait);
 
-	mmc_blk_mq_post_req(mq, req);
+	/* context unknown */
+	mmc_blk_mq_post_req(mq, req, false);
 }
 
 static bool mmc_blk_rw_wait_cond(struct mmc_queue *mq, int *err)
@@ -2238,7 +2244,7 @@ static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
 	err = mmc_start_request(host, &mqrq->brq.mrq);
 
 	if (prev_req)
-		mmc_blk_mq_post_req(mq, prev_req);
+		mmc_blk_mq_post_req(mq, prev_req, true);
 
 	if (err)
 		mq->rw_wait = false;
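[Editor's note: a usage sketch, not from the patch. Passing can_sleep=true
is only valid when the caller is known to run in preemptible context; on a
debug build that contract can be asserted cheaply. The mydrv_finish()
wrapper below is hypothetical.]

/* Hypothetical wrapper asserting the can_sleep=true contract. */
static void mydrv_finish(struct mmc_queue *mq, struct request *req)
{
	/*
	 * might_sleep() splats (with CONFIG_DEBUG_ATOMIC_SLEEP) if this
	 * path is ever reached from atomic context.
	 */
	might_sleep();
	mmc_blk_mq_post_req(mq, req, true);
}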