From patchwork Mon Mar 16 10:01:18 2020
X-Patchwork-Id: 11440039
From: Baolin Wang <baolin.wang7@gmail.com>
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org,
 adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org,
 baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com,
 linux-mmc@vger.kernel.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 1/8] block: Change the dispatch_request() API to
 support batch requests
Date: Mon, 16 Mar 2020 18:01:18 +0800
Message-Id: <08d5beb3981fa306c4a54cd399c4b015a66b2aab.1584350380.git.baolin.wang7@gmail.com>
Some SD/MMC host controllers support packed commands or packed requests,
meaning that several requests can be sent to the host controller at a
time to improve performance. But blk-mq always takes one request from
the scheduler and dispatches it to the device, regardless of the driver
or the scheduler, so there is only ever one request in the local list in
blk_mq_dispatch_rq_list(). As a result, bd.last is always true, and the
driver cannot use bd.last to decide whether more requests are pending in
the hardware queue, which it would need in order to package requests.

This is therefore a preparation patch that changes the
dispatch_request() API to allow dispatching more than one request from
the scheduler.

Suggested-by: Arnd Bergmann
Signed-off-by: Baolin Wang
---
 block/bfq-iosched.c      | 12 +++++++++---
 block/blk-mq-sched.c     | 15 ++++-----------
 block/kyber-iosched.c    | 20 +++++++++++++-------
 block/mq-deadline.c      | 12 +++++++++---
 include/linux/elevator.h |  2 +-
 5 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 8c436ab..d7a128e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4789,7 +4789,8 @@ static inline void bfq_update_dispatch_stats(struct request_queue *q,
                                              bool idle_timer_disabled) {}
 #endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
-static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+static int bfq_dispatch_requests(struct blk_mq_hw_ctx *hctx,
+                                 struct list_head *list)
 {
     struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
     struct request *rq;
@@ -4811,7 +4812,12 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
     bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
                               idle_timer_disabled);
 
-    return rq;
+    if (!rq)
+        return 0;
+
+    list_add(&rq->queuelist, list);
+
+    return 1;
 }
 
 /*
@@ -6785,7 +6791,7 @@ static ssize_t bfq_low_latency_store(struct elevator_queue *e,
     .finish_request     = bfq_finish_requeue_request,
     .exit_icq           = bfq_exit_icq,
     .insert_requests    = bfq_insert_requests,
-    .dispatch_request   = bfq_dispatch_request,
+    .dispatch_requests  = bfq_dispatch_requests,
     .next_request       = elv_rb_latter_request,
     .former_request     = elv_rb_former_request,
     .allow_merge        = bfq_allow_bio_merge,
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index ca22afd..f49f9d9 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -90,28 +90,21 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
     struct request_queue *q = hctx->queue;
     struct elevator_queue *e = q->elevator;
     LIST_HEAD(rq_list);
+    int ret;
 
     do {
-        struct request *rq;
-
         if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
             break;
 
         if (!blk_mq_get_dispatch_budget(hctx))
             break;
 
-        rq = e->type->ops.dispatch_request(hctx);
-        if (!rq) {
+        ret = e->type->ops.dispatch_requests(hctx, &rq_list);
+        if (ret == 0) {
             blk_mq_put_dispatch_budget(hctx);
             break;
         }
-
-        /*
-         * Now this rq owns the budget which has to be released
-         * if this rq won't be queued to driver via .queue_rq()
-         * in blk_mq_dispatch_rq_list().
-         */
-        list_add(&rq->queuelist, &rq_list);
     } while (blk_mq_dispatch_rq_list(q, &rq_list, true));
 }
 
@@ -171,7 +164,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 {
     struct request_queue *q = hctx->queue;
     struct elevator_queue *e = q->elevator;
-    const bool has_sched_dispatch = e && e->type->ops.dispatch_request;
+    const bool has_sched_dispatch = e && e->type->ops.dispatch_requests;
     LIST_HEAD(rq_list);
 
     /* RCU or SRCU read lock is needed before checking quiesced flag */
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 34dcea0..8f58434 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -796,12 +796,13 @@ static int kyber_get_domain_token(struct kyber_queue_data *kqd,
     return NULL;
 }
 
-static struct request *kyber_dispatch_request(struct blk_mq_hw_ctx *hctx)
+static int kyber_dispatch_requests(struct blk_mq_hw_ctx *hctx,
+                                   struct list_head *list)
 {
     struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
     struct kyber_hctx_data *khd = hctx->sched_data;
     struct request *rq;
-    int i;
+    int i, ret = 0;
 
     spin_lock(&khd->lock);
 
@@ -811,8 +812,11 @@ static struct request *kyber_dispatch_request(struct blk_mq_hw_ctx *hctx)
      */
     if (khd->batching < kyber_batch_size[khd->cur_domain]) {
         rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
-        if (rq)
+        if (rq) {
+            list_add(&rq->queuelist, list);
+            ret = 1;
             goto out;
+        }
     }
 
     /*
@@ -832,14 +836,16 @@ static struct request *kyber_dispatch_request(struct blk_mq_hw_ctx *hctx)
             khd->cur_domain++;
 
         rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
-        if (rq)
+        if (rq) {
+            list_add(&rq->queuelist, list);
+            ret = 1;
             goto out;
+        }
     }
 
-    rq = NULL;
 out:
     spin_unlock(&khd->lock);
-    return rq;
+    return ret;
 }
 
 static bool kyber_has_work(struct blk_mq_hw_ctx *hctx)
@@ -1020,7 +1026,7 @@ static int kyber_batching_show(void *data, struct seq_file *m)
     .finish_request = kyber_finish_request,
     .requeue_request = kyber_finish_request,
     .completed_request = kyber_completed_request,
-    .dispatch_request = kyber_dispatch_request,
+    .dispatch_requests = kyber_dispatch_requests,
     .has_work = kyber_has_work,
 },
 #ifdef CONFIG_BLK_DEBUG_FS
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index b490f47..9fbffba 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -378,7 +378,8 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd)
  * different hardware queue. This is because mq-deadline has shared
  * state for all hardware queues, in terms of sorting, FIFOs, etc.
  */
-static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+static int dd_dispatch_requests(struct blk_mq_hw_ctx *hctx,
+                                struct list_head *list)
 {
     struct deadline_data *dd = hctx->queue->elevator->elevator_data;
     struct request *rq;
@@ -387,7 +388,12 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
     rq = __dd_dispatch_request(dd);
     spin_unlock(&dd->lock);
 
-    return rq;
+    if (!rq)
+        return 0;
+
+    list_add(&rq->queuelist, list);
+
+    return 1;
 }
 
 static void dd_exit_queue(struct elevator_queue *e)
@@ -774,7 +780,7 @@ static void deadline_dispatch_stop(struct seq_file *m, void *v)
 static struct elevator_type mq_deadline = {
     .ops = {
         .insert_requests    = dd_insert_requests,
-        .dispatch_request   = dd_dispatch_request,
+        .dispatch_requests  = dd_dispatch_requests,
         .prepare_request    = dd_prepare_request,
         .finish_request     = dd_finish_request,
         .next_request       = elv_rb_latter_request,
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 901bda3..a65bf5d 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -42,7 +42,7 @@ struct elevator_mq_ops {
     void (*prepare_request)(struct request *, struct bio *bio);
     void (*finish_request)(struct request *);
     void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
-    struct request *(*dispatch_request)(struct blk_mq_hw_ctx *);
+    int (*dispatch_requests)(struct blk_mq_hw_ctx *, struct list_head *);
     bool (*has_work)(struct blk_mq_hw_ctx *);
     void (*completed_request)(struct request *, u64);
     void (*requeue_request)(struct request *);
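To illustrate the contract the new hook must satisfy, here is a minimal,
hypothetical elevator stub (not part of this series; my_sched_pop_request()
is a made-up placeholder for scheduler-private logic): move zero or more
requests onto the caller's list, and return 0 only when nothing was
dispatched, so blk_mq_do_dispatch_sched() knows to release the budget.

    #include <linux/blk-mq.h>
    #include <linux/elevator.h>

    /* Hypothetical scheduler-private helper, declared for illustration. */
    static struct request *my_sched_pop_request(struct blk_mq_hw_ctx *hctx);

    static int my_dispatch_requests(struct blk_mq_hw_ctx *hctx,
                                    struct list_head *list)
    {
        struct request *rq = my_sched_pop_request(hctx);

        if (!rq)
            return 0;   /* nothing dispatched; the caller puts the budget */

        /* Hand the request to blk_mq_dispatch_rq_list() via @list. */
        list_add(&rq->queuelist, list);
        return 1;
    }

A scheduler that can dispatch several requests per call simply adds them
all to @list before returning 1, which is what the following patch does.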
From patchwork Mon Mar 16 10:01:19 2020
X-Patchwork-Id: 11440043
From: Baolin Wang <baolin.wang7@gmail.com>
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org,
 adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org,
 baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com,
 linux-mmc@vger.kernel.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 2/8] block: Allow sending a batch of requests from
 the scheduler to hardware
Date: Mon, 16 Mar 2020 18:01:19 +0800

Some SD/MMC host controllers can support packed requests, which means
several requests can be sent to the host controller at a time to
improve performance. The hardware driver therefore expects blk-mq to
dispatch a batch of requests at a time, using bd.last to indicate the
last request in the batch so that it can combine requests as much as
possible.

Thus we should add a batch-requests setting from the block driver to
tell the scheduler how many requests can be dispatched in a batch, and
change the schedulers to dispatch more than one request when a maximum
batch-requests number is set.

Signed-off-by: Baolin Wang
Tested-by: Baolin Wang
---
 block/bfq-iosched.c    | 32 ++++++++++++++--------
 block/blk-mq.c         |  2 --
 block/blk-settings.c   | 13 +++++++++
 block/kyber-iosched.c  | 74 +++++++++++++++++++++++++-----------------------
 block/mq-deadline.c    | 20 ++++++++++----
 include/linux/blkdev.h |  8 ++++++
 6 files changed, 95 insertions(+), 54 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d7a128e..9c1e3aa 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4793,29 +4793,37 @@ static int bfq_dispatch_requests(struct blk_mq_hw_ctx *hctx,
                                  struct list_head *list)
 {
     struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+    unsigned int batch_reqs = queue_max_batch_requests(hctx->queue) ? : 1;
     struct request *rq;
     struct bfq_queue *in_serv_queue;
     bool waiting_rq, idle_timer_disabled;
+    int i;
 
-    spin_lock_irq(&bfqd->lock);
+    for (i = 0; i < batch_reqs; i++) {
+        spin_lock_irq(&bfqd->lock);
 
-    in_serv_queue = bfqd->in_service_queue;
-    waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+        in_serv_queue = bfqd->in_service_queue;
+        waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
 
-    rq = __bfq_dispatch_request(hctx);
+        rq = __bfq_dispatch_request(hctx);
 
-    idle_timer_disabled =
-        waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+        idle_timer_disabled =
+            waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
 
-    spin_unlock_irq(&bfqd->lock);
+        spin_unlock_irq(&bfqd->lock);
 
-    bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
-                              idle_timer_disabled);
+        bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+                                  idle_timer_disabled);
 
-    if (!rq)
-        return 0;
+        if (!rq) {
+            if (list_empty(list))
+                return 0;
 
-    list_add(&rq->queuelist, list);
+            return 1;
+        }
+
+        list_add(&rq->queuelist, list);
+    }
 
     return 1;
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a12b176..2588e7a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1193,8 +1193,6 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
     if (list_empty(list))
         return false;
 
-    WARN_ON(!list_is_singular(list) && got_budget);
-
     /*
      * Now process all the entries, sending them to the driver.
      */
diff --git a/block/blk-settings.c b/block/blk-settings.c
index c8eda2e..8c0b325 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -59,6 +59,7 @@ void blk_set_default_limits(struct queue_limits *lim)
     lim->io_opt = 0;
     lim->misaligned = 0;
     lim->zoned = BLK_ZONED_NONE;
+    lim->max_batch_reqs = 1;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -871,6 +872,18 @@ bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
 }
 EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
 
+/**
+ * blk_queue_max_batch_requests - set max requests for batch processing
+ * @q: the request queue for the device
+ * @max_batch_requests: maximum number of requests in a batch
+ **/
+void blk_queue_max_batch_requests(struct request_queue *q,
+                                  unsigned int max_batch_requests)
+{
+    q->limits.max_batch_reqs = max_batch_requests;
+}
+EXPORT_SYMBOL(blk_queue_max_batch_requests);
+
 static int __init blk_settings_init(void)
 {
     blk_max_low_pfn = max_low_pfn - 1;
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 8f58434..3a84a5f 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -801,50 +801,56 @@ static int kyber_dispatch_requests(struct blk_mq_hw_ctx *hctx,
 {
     struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
     struct kyber_hctx_data *khd = hctx->sched_data;
+    unsigned int batch_reqs = queue_max_batch_requests(hctx->queue) ? : 1;
     struct request *rq;
-    int i, ret = 0;
+    int i, j, ret = 0;
 
     spin_lock(&khd->lock);
 
-    /*
-     * First, if we are still entitled to batch, try to dispatch a request
-     * from the batch.
-     */
-    if (khd->batching < kyber_batch_size[khd->cur_domain]) {
-        rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
-        if (rq) {
-            list_add(&rq->queuelist, list);
-            ret = 1;
-            goto out;
+    for (j = 0; j < batch_reqs; j++) {
+        /*
+         * First, if we are still entitled to batch, try to dispatch a
+         * request from the batch.
+         */
+        if (khd->batching < kyber_batch_size[khd->cur_domain]) {
+            rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
+            if (rq) {
+                list_add(&rq->queuelist, list);
+                ret = 1;
+                continue;
+            }
         }
-    }
-
-    /*
-     * Either,
-     * 1. We were no longer entitled to a batch.
-     * 2. The domain we were batching didn't have any requests.
-     * 3. The domain we were batching was out of tokens.
-     *
-     * Start another batch. Note that this wraps back around to the original
-     * domain if no other domains have requests or tokens.
-     */
-    khd->batching = 0;
-    for (i = 0; i < KYBER_NUM_DOMAINS; i++) {
-        if (khd->cur_domain == KYBER_NUM_DOMAINS - 1)
-            khd->cur_domain = 0;
-        else
-            khd->cur_domain++;
 
-        rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
-        if (rq) {
-            list_add(&rq->queuelist, list);
-            ret = 1;
-            goto out;
+        /*
+         * Either,
+         * 1. We were no longer entitled to a batch.
+         * 2. The domain we were batching didn't have any requests.
+         * 3. The domain we were batching was out of tokens.
+         *
+         * Start another batch. Note that this wraps back around to the
+         * original domain if no other domains have requests or tokens.
+         */
+        khd->batching = 0;
+        for (i = 0; i < KYBER_NUM_DOMAINS; i++) {
+            if (khd->cur_domain == KYBER_NUM_DOMAINS - 1)
+                khd->cur_domain = 0;
+            else
+                khd->cur_domain++;
+
+            rq = kyber_dispatch_cur_domain(kqd, khd, hctx);
+            if (rq) {
+                list_add(&rq->queuelist, list);
+                ret = 1;
+                break;
+            }
         }
     }
 
-out:
     spin_unlock(&khd->lock);
+
+    if (list_empty(list))
+        ret = 0;
+
     return ret;
 }
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 9fbffba..4e3d58a 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -382,16 +382,24 @@ static int dd_dispatch_requests(struct blk_mq_hw_ctx *hctx,
                                 struct list_head *list)
 {
     struct deadline_data *dd = hctx->queue->elevator->elevator_data;
+    unsigned int batch_reqs = queue_max_batch_requests(hctx->queue) ? : 1;
     struct request *rq;
+    int i;
 
-    spin_lock(&dd->lock);
-    rq = __dd_dispatch_request(dd);
-    spin_unlock(&dd->lock);
+    for (i = 0; i < batch_reqs; i++) {
+        spin_lock(&dd->lock);
+        rq = __dd_dispatch_request(dd);
+        spin_unlock(&dd->lock);
 
-    if (!rq)
-        return 0;
+        if (!rq) {
+            if (list_empty(list))
+                return 0;
 
-    list_add(&rq->queuelist, list);
+            return 1;
+        }
+
+        list_add(&rq->queuelist, list);
+    }
 
     return 1;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 053ea4b..d7032a0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -338,6 +338,7 @@ struct queue_limits {
     unsigned int        max_write_zeroes_sectors;
     unsigned int        discard_granularity;
     unsigned int        discard_alignment;
+    unsigned int        max_batch_reqs;
 
     unsigned short      max_segments;
     unsigned short      max_integrity_segments;
@@ -1109,6 +1110,8 @@ extern void blk_queue_required_elevator_features(struct request_queue *q,
                                                  unsigned int features);
 extern bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
                                               struct device *dev);
+extern void blk_queue_max_batch_requests(struct request_queue *q,
+                                         unsigned int max_batch_requests);
 
 /*
  * Number of physical segments as sent to the device.
@@ -1291,6 +1294,11 @@ static inline unsigned int queue_max_segment_size(const struct request_queue *q)
     return q->limits.max_segment_size;
 }
 
+static inline unsigned int queue_max_batch_requests(const struct request_queue *q)
+{
+    return q->limits.max_batch_reqs;
+}
+
 static inline unsigned queue_logical_block_size(const struct request_queue *q)
 {
     int retval = 512;
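As an illustration of how a block driver consumes this interface, here
is a hedged sketch (the driver function is hypothetical, not from the
series; 16 is just an example limit): the driver advertises its batch
capability once at queue-setup time, and the schedulers above will then
move up to that many requests per dispatch_requests() call.

    #include <linux/blkdev.h>

    /*
     * Illustrative only: a driver whose hardware can accept up to 16
     * requests per batch tells the block layer so; schedulers fall
     * back to a batch size of 1 when this is left at the default.
     */
    static void my_driver_setup_queue(struct request_queue *q)
    {
        blk_queue_max_batch_requests(q, 16);
    }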
From patchwork Mon Mar 16 10:01:20 2020
X-Patchwork-Id: 11440045
From: Baolin Wang <baolin.wang7@gmail.com>
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org,
 adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org,
 baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com,
 linux-mmc@vger.kernel.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 3/8] mmc: Add MMC packed request support for MMC
 software queue
Date: Mon, 16 Mar 2020 18:01:20 +0800

Some SD controllers can support packed commands or packed requests,
which means several requests can be packaged and handed to the host
controller to be handled at one time. This reduces interrupts and
improves DMA transfers, and as a result I/O performance.

Thus this patch adds an MMC packed function to support packed requests
or packed commands on top of the MMC software queue. The basic concept
is to collect as many requests from the block layer as possible and
link them into the MMC packed queue in mmc_blk_hsq_issue_rw_rq(). When
the last request of the hardware queue arrives, or more than 16
requests have been collected, or a large request arrives, we start to
package a packed request for the host controller.

The MMC packed function also supplies packed-algorithm operations that
help to package qualified requests. After a packed request finishes,
the MMC packed function helps to complete each request; at the same
time, the MMC packed queue allows more requests to be collected from
the block layer. After completing each request, the MMC packed function
can try to package another packed request for the host controller
directly in the completion path, if there are enough requests in the
MMC packed queue or the request pending flag is not set. If the pending
flag was set, we should let mmc_blk_hsq_issue_rw_rq() collect as many
requests as possible.

Signed-off-by: Baolin Wang
---
 drivers/mmc/core/block.c      |  35 ++++--
 drivers/mmc/core/block.h      |   3 +-
 drivers/mmc/core/core.c       |  26 ++++
 drivers/mmc/core/core.h       |   2 +
 drivers/mmc/core/queue.c      |  22 +++-
 drivers/mmc/host/mmc_hsq.c    | 271 ++++++++++++++++++++++++++++++------
 drivers/mmc/host/mmc_hsq.h    |  25 +++-
 drivers/mmc/host/sdhci-sprd.c |   2 +-
 include/linux/mmc/core.h      |   6 +
 include/linux/mmc/host.h      |   9 ++
 10 files changed, 351 insertions(+), 50 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 55d52fc..19c589e 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1537,7 +1537,8 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
     return mmc_blk_cqe_start_req(mq->card->host, mrq);
 }
 
-static int mmc_blk_hsq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_hsq_issue_rw_rq(struct mmc_queue *mq, struct request *req,
+                                   bool last)
 {
     struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
     struct mmc_host *host = mq->card->host;
@@ -1548,19 +1549,25 @@ static int mmc_blk_hsq_issue_rw_rq(struct mmc_queue *mq, struct request *req)
     mmc_pre_req(host, &mqrq->brq.mrq);
 
     err = mmc_cqe_start_req(host, &mqrq->brq.mrq);
-    if (err)
+    if (err) {
         mmc_post_req(host, &mqrq->brq.mrq, err);
+        return err;
+    }
 
-    return err;
+    if (last)
+        mmc_cqe_commit_rqs(host);
+
+    return 0;
 }
 
-static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
+static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req,
+                                   bool last)
 {
     struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
     struct mmc_host *host = mq->card->host;
 
     if (host->hsq_enabled)
-        return mmc_blk_hsq_issue_rw_rq(mq, req);
+        return mmc_blk_hsq_issue_rw_rq(mq, req, last);
 
     mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
@@ -1959,6 +1966,18 @@ static void mmc_blk_hsq_req_done(struct mmc_request *mrq)
     if (mmc_blk_rq_error(&mqrq->brq) ||
         mmc_blk_urgent_bkops_needed(mq, mqrq)) {
         spin_lock_irqsave(&mq->lock, flags);
+        /*
+         * The HSQ may complete more than one request at a time in
+         * packed request mode. So if one recovery request is
+         * already pending, the following error requests should
+         * just be completed directly, since we should not do
+         * recovery continuously.
+         */
+        if (mq->recovery_needed) {
+            spin_unlock_irqrestore(&mq->lock, flags);
+            goto out;
+        }
+
         mq->recovery_needed = true;
         mq->recovery_req = req;
         spin_unlock_irqrestore(&mq->lock, flags);
@@ -1971,6 +1990,7 @@ static void mmc_blk_hsq_req_done(struct mmc_request *mrq)
 
     mmc_blk_rw_reset_success(mq, req);
 
+out:
     /*
      * Block layer timeouts race with completions which means the normal
      * completion path cannot be used during recovery.
@@ -2236,7 +2256,8 @@ static int mmc_blk_wait_for_idle(struct mmc_queue *mq, struct mmc_host *host)
     return mmc_blk_rw_wait(mq, NULL);
 }
 
-enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
+enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req,
+                                    bool last)
 {
     struct mmc_blk_data *md = mq->blkdata;
     struct mmc_card *card = md->queue.card;
@@ -2280,7 +2301,7 @@ enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
     case REQ_OP_READ:
     case REQ_OP_WRITE:
         if (mq->use_cqe)
-            ret = mmc_blk_cqe_issue_rw_rq(mq, req);
+            ret = mmc_blk_cqe_issue_rw_rq(mq, req, last);
         else
             ret = mmc_blk_mq_issue_rw_rq(mq, req);
         break;
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 31153f6..8bfb89f 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -9,7 +9,8 @@
 
 enum mmc_issued;
 
-enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req);
+enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req,
+                                    bool last);
 void mmc_blk_mq_complete(struct request *req);
 void mmc_blk_mq_recovery(struct mmc_queue *mq);
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index aa54d35..f96c0b1 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -329,6 +329,7 @@ static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
         }
     }
 
+    INIT_LIST_HEAD(&mrq->list);
     return 0;
 }
 
@@ -536,6 +537,31 @@ void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq)
 }
 EXPORT_SYMBOL(mmc_cqe_post_req);
 
+/**
+ * mmc_cqe_commit_rqs - Commit requests pending in the CQE
+ * @host: MMC host
+ */
+void mmc_cqe_commit_rqs(struct mmc_host *host)
+{
+    if (host->cqe_ops->cqe_commit_rqs)
+        host->cqe_ops->cqe_commit_rqs(host);
+}
+EXPORT_SYMBOL(mmc_cqe_commit_rqs);
+
+/**
+ * mmc_cqe_is_busy - Check whether the CQE is busy
+ * @host: MMC host
+ */
+bool mmc_cqe_is_busy(struct mmc_host *host)
+{
+    if (host->cqe_ops->cqe_is_busy)
+        return host->cqe_ops->cqe_is_busy(host);
+
+    return false;
+}
+EXPORT_SYMBOL(mmc_cqe_is_busy);
+
 /* Arbitrary 1 second timeout */
 #define MMC_CQE_RECOVERY_TIMEOUT 1000
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 575ac02..db81ba2 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -139,6 +139,8 @@ static inline void mmc_claim_host(struct mmc_host *host)
 int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq);
 void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
 int mmc_cqe_recovery(struct mmc_host *host);
+void mmc_cqe_commit_rqs(struct mmc_host *host);
+bool mmc_cqe_is_busy(struct mmc_host *host);
 
 /**
  * mmc_pre_req - Prepare for a new request
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 25bee3d..774ea82 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -285,11 +285,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
         }
         break;
     case MMC_ISSUE_ASYNC:
-        /*
-         * For MMC host software queue, we only allow 2 requests in
-         * flight to avoid a long latency.
-         */
-        if (host->hsq_enabled && mq->in_flight[issue_type] > 2) {
+        if (mq->use_cqe && mmc_cqe_is_busy(host)) {
             spin_unlock_irq(&mq->lock);
             return BLK_STS_RESOURCE;
         }
@@ -330,7 +326,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 
     blk_mq_start_request(req);
 
-    issued = mmc_blk_mq_issue_rq(mq, req);
+    issued = mmc_blk_mq_issue_rq(mq, req, bd->last);
 
     switch (issued) {
     case MMC_REQ_BUSY:
@@ -362,8 +358,19 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
     return ret;
 }
 
+static void mmc_mq_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+    struct mmc_queue *mq = hctx->queue->queuedata;
+    struct mmc_card *card = mq->card;
+    struct mmc_host *host = card->host;
+
+    if (mq->use_cqe)
+        mmc_cqe_commit_rqs(host);
+}
+
 static const struct blk_mq_ops mmc_mq_ops = {
     .queue_rq    = mmc_mq_queue_rq,
+    .commit_rqs  = mmc_mq_commit_rqs,
     .init_request = mmc_mq_init_request,
     .exit_request = mmc_mq_exit_request,
     .complete    = mmc_blk_mq_complete,
@@ -390,6 +397,9 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
             "merging was advertised but not possible");
     blk_queue_max_segments(mq->queue, mmc_get_max_segments(host));
 
+    if (host->max_packed_reqs != 0)
+        blk_queue_max_batch_requests(mq->queue, host->max_packed_reqs);
+
     if (mmc_card_mmc(card))
         block_size = card->ext_csd.data_sector_size;
diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c
index aafc0d1..c0ba24e 100644
--- a/drivers/mmc/host/mmc_hsq.c
+++ b/drivers/mmc/host/mmc_hsq.c
@@ -9,6 +9,7 @@
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 
 #include "mmc_hsq.h"
@@ -16,16 +17,46 @@
 #define HSQ_NUM_SLOTS    64
 #define HSQ_INVALID_TAG  HSQ_NUM_SLOTS
 
+#define HSQ_PACKED_FLUSH_BLOCKS  256
+
+/**
+ * mmc_hsq_packed_algo_rw - the algorithm to package read or write requests
+ * @mmc: the host controller
+ *
+ * TODO: we can add more conditions to decide if we can package a
+ * request or not.
+ */
+void mmc_hsq_packed_algo_rw(struct mmc_host *mmc)
+{
+    struct mmc_hsq *hsq = mmc->cqe_private;
+    struct hsq_packed *packed = hsq->packed;
+    struct mmc_packed_request *prq = &packed->prq;
+    struct mmc_request *mrq, *t;
+    u32 i = 0;
+
+    list_for_each_entry_safe(mrq, t, &packed->list, list) {
+        if (++i > packed->max_entries)
+            break;
+
+        list_move_tail(&mrq->list, &prq->list);
+        prq->nr_reqs++;
+    }
+}
+EXPORT_SYMBOL_GPL(mmc_hsq_packed_algo_rw);
+
 static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
 {
+    struct hsq_packed *packed = hsq->packed;
     struct mmc_host *mmc = hsq->mmc;
     struct hsq_slot *slot;
+    struct mmc_request *mrq;
     unsigned long flags;
+    int ret;
 
     spin_lock_irqsave(&hsq->lock, flags);
 
     /* Make sure we are not already running a request now */
-    if (hsq->mrq) {
+    if (hsq->mrq || (packed && packed->prq.nr_reqs)) {
         spin_unlock_irqrestore(&hsq->lock, flags);
         return;
     }
@@ -36,16 +67,58 @@ static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
         return;
     }
 
-    slot = &hsq->slot[hsq->next_tag];
-    hsq->mrq = slot->mrq;
-    hsq->qcnt--;
+    if (packed) {
+        /* Try to package requests */
+        packed->ops->packed_algo(mmc);
+
+        packed->busy = true;
+        hsq->qcnt -= packed->prq.nr_reqs;
+    } else {
+        slot = &hsq->slot[hsq->next_tag];
+        hsq->mrq = slot->mrq;
+        hsq->qcnt--;
+    }
 
     spin_unlock_irqrestore(&hsq->lock, flags);
 
-    if (mmc->ops->request_atomic)
-        mmc->ops->request_atomic(mmc, hsq->mrq);
-    else
-        mmc->ops->request(mmc, hsq->mrq);
+    if (!packed) {
+        if (mmc->ops->request_atomic)
+            mmc->ops->request_atomic(mmc, hsq->mrq);
+        else
+            mmc->ops->request(mmc, hsq->mrq);
+
+        return;
+    }
+
+    if (packed->ops->prepare_hardware) {
+        ret = packed->ops->prepare_hardware(mmc);
+        if (ret) {
+            pr_err("failed to prepare hardware\n");
+            goto error;
+        }
+    }
+
+    ret = packed->ops->packed_request(mmc, &packed->prq);
+    if (ret) {
+        pr_err("failed to send packed requests\n");
+        goto error;
+    }
+
+    return;
+
+error:
+    spin_lock_irqsave(&hsq->lock, flags);
+
+    list_for_each_entry(mrq, &packed->prq.list, list) {
+        struct mmc_data *data = mrq->data;
+
+        data->error = ret;
+        data->bytes_xfered = 0;
+    }
+
+    spin_unlock_irqrestore(&hsq->lock, flags);
+
+    mmc_hsq_finalize_packed_request(mmc, &packed->prq);
 }
 
@@ -87,16 +160,21 @@ static void mmc_hsq_update_next_tag(struct mmc_hsq *hsq, int remains)
 
 static void mmc_hsq_post_request(struct mmc_hsq *hsq)
 {
+    struct hsq_packed *packed = hsq->packed;
     unsigned long flags;
     int remains;
 
     spin_lock_irqsave(&hsq->lock, flags);
 
     remains = hsq->qcnt;
-    hsq->mrq = NULL;
+    if (packed) {
+        packed->prq.nr_reqs = 0;
+    } else {
+        hsq->mrq = NULL;
 
-    /* Update the next available tag to be queued. */
-    mmc_hsq_update_next_tag(hsq, remains);
+        /* Update the next available tag to be queued. */
+        mmc_hsq_update_next_tag(hsq, remains);
+    }
 
     if (hsq->waiting_for_idle && !remains) {
         hsq->waiting_for_idle = false;
@@ -111,10 +189,6 @@ static void mmc_hsq_post_request(struct mmc_hsq *hsq)
 
     spin_unlock_irqrestore(&hsq->lock, flags);
 
-    /*
-     * Try to pump new request to host controller as fast as possible,
-     * after completing previous request.
-     */
     if (remains > 0)
         mmc_hsq_pump_requests(hsq);
 }
@@ -154,6 +228,91 @@ bool mmc_hsq_finalize_request(struct mmc_host *mmc, struct mmc_request *mrq)
 }
 EXPORT_SYMBOL_GPL(mmc_hsq_finalize_request);
 
+/**
+ * mmc_hsq_finalize_packed_request - finalize one packed request
+ * @mmc: the host controller
+ * @prq: the packed request to be finalized
+ */
+void mmc_hsq_finalize_packed_request(struct mmc_host *mmc,
+                                     struct mmc_packed_request *prq)
+{
+    struct mmc_hsq *hsq = mmc->cqe_private;
+    struct hsq_packed *packed = hsq->packed;
+    struct mmc_request *mrq, *t;
+    LIST_HEAD(head);
+    unsigned long flags;
+
+    if (!packed || !prq)
+        return;
+
+    if (packed->ops->unprepare_hardware &&
+        packed->ops->unprepare_hardware(mmc))
+        pr_err("failed to unprepare hardware\n");
+
+    /*
+     * Clear the busy flag to allow collecting more requests into the
+     * command queue, but we cannot pump them to the controller yet;
+     * we should wait until all requests have completed. While the
+     * requests are completing, we should collect as many new requests
+     * from the block layer as possible.
+     */
+    spin_lock_irqsave(&hsq->lock, flags);
+    list_splice_tail_init(&prq->list, &head);
+    packed->busy = false;
+    spin_unlock_irqrestore(&hsq->lock, flags);
+
+    list_for_each_entry_safe(mrq, t, &head, list) {
+        list_del(&mrq->list);
+
+        mmc_cqe_request_done(mmc, mrq);
+    }
+
+    mmc_hsq_post_request(hsq);
+}
+EXPORT_SYMBOL_GPL(mmc_hsq_finalize_packed_request);
+
+static void mmc_hsq_commit_rqs(struct mmc_host *mmc)
+{
+    struct mmc_hsq *hsq = mmc->cqe_private;
+    struct hsq_packed *packed = hsq->packed;
+
+    if (!packed)
+        return;
+
+    mmc_hsq_pump_requests(hsq);
+}
+
+static bool mmc_hsq_is_busy(struct mmc_host *mmc)
+{
+    struct mmc_hsq *hsq = mmc->cqe_private;
+    struct hsq_packed *packed = hsq->packed;
+    unsigned long flags;
+    bool busy;
+
+    spin_lock_irqsave(&hsq->lock, flags);
+
+    /*
+     * In packed mode, when the hardware is busy, we can only allow up
+     * to the maximum packed number of requests to be ready in the
+     * software queue, to be queued after the previous packed request
+     * completes; this avoids long latency.
+     *
+     * In non-packed mode, we can only allow 2 requests in flight to
+     * avoid long latency.
+     *
+     * Otherwise return BLK_STS_RESOURCE to tell the block layer to
+     * dispatch the requests later.
+     */
+    if (packed)
+        busy = packed->busy && hsq->qcnt >= packed->max_entries;
+    else
+        busy = hsq->qcnt > 1;
+
+    spin_unlock_irqrestore(&hsq->lock, flags);
+
+    return busy;
+}
+
 static void mmc_hsq_recovery_start(struct mmc_host *mmc)
 {
     struct mmc_hsq *hsq = mmc->cqe_private;
@@ -189,7 +348,8 @@ static void mmc_hsq_recovery_finish(struct mmc_host *mmc)
 static int mmc_hsq_request(struct mmc_host *mmc, struct mmc_request *mrq)
 {
     struct mmc_hsq *hsq = mmc->cqe_private;
-    int tag = mrq->tag;
+    struct hsq_packed *packed = hsq->packed;
+    int nr_rqs = 0, tag = mrq->tag;
 
     spin_lock_irq(&hsq->lock);
 
@@ -204,20 +364,37 @@ static int mmc_hsq_request(struct mmc_host *mmc, struct mmc_request *mrq)
         return -EBUSY;
     }
 
-    hsq->slot[tag].mrq = mrq;
+    hsq->qcnt++;
 
-    /*
-     * Set the next tag as current request tag if no available
-     * next tag.
-     */
-    if (hsq->next_tag == HSQ_INVALID_TAG)
-        hsq->next_tag = tag;
+    if (packed) {
+        list_add_tail(&mrq->list, &packed->list);
 
-    hsq->qcnt++;
+        nr_rqs = hsq->qcnt;
+    } else {
+        hsq->slot[tag].mrq = mrq;
+
+        /*
+         * Set the next tag as current request tag if no available
+         * next tag.
+         */
+        if (hsq->next_tag == HSQ_INVALID_TAG)
+            hsq->next_tag = tag;
+    }
 
     spin_unlock_irq(&hsq->lock);
 
-    mmc_hsq_pump_requests(hsq);
+    /*
+     * In non-packed request mode, we should pump requests as soon as
+     * possible.
+     *
+     * In packed request mode, if this is a large request or the
+     * request count is larger than the maximum packed number, we
+     * should pump requests to the controller. Otherwise we should
+     * try to combine as many requests as we can.
+     */
+    if (!packed || mrq->data->blocks > HSQ_PACKED_FLUSH_BLOCKS ||
+        nr_rqs >= packed->max_entries)
+        mmc_hsq_pump_requests(hsq);
 
     return 0;
 }
@@ -230,12 +407,17 @@ static void mmc_hsq_post_req(struct mmc_host *mmc, struct mmc_request *mrq)
 
 static bool mmc_hsq_queue_is_idle(struct mmc_hsq *hsq, int *ret)
 {
+    struct hsq_packed *packed = hsq->packed;
     bool is_idle;
 
     spin_lock_irq(&hsq->lock);
 
-    is_idle = (!hsq->mrq && !hsq->qcnt) ||
-        hsq->recovery_halt;
+    if (packed)
+        is_idle = (!packed->prq.nr_reqs && !hsq->qcnt) ||
+            hsq->recovery_halt;
+    else
+        is_idle = (!hsq->mrq && !hsq->qcnt) ||
+            hsq->recovery_halt;
 
     *ret = hsq->recovery_halt ? -EBUSY : 0;
     hsq->waiting_for_idle = !is_idle;
@@ -312,17 +494,38 @@ static int mmc_hsq_enable(struct mmc_host *mmc, struct mmc_card *card)
     .cqe_wait_for_idle    = mmc_hsq_wait_for_idle,
     .cqe_recovery_start   = mmc_hsq_recovery_start,
     .cqe_recovery_finish  = mmc_hsq_recovery_finish,
+    .cqe_is_busy          = mmc_hsq_is_busy,
+    .cqe_commit_rqs       = mmc_hsq_commit_rqs,
 };
 
-int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc)
+int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc,
+                 const struct hsq_packed_ops *ops, int max_packed)
 {
-    hsq->num_slots = HSQ_NUM_SLOTS;
-    hsq->next_tag = HSQ_INVALID_TAG;
+    if (ops && max_packed > 1) {
+        struct hsq_packed *packed;
+
+        packed = devm_kzalloc(mmc_dev(mmc), sizeof(struct hsq_packed),
+                              GFP_KERNEL);
+        if (!packed)
+            return -ENOMEM;
+
+        packed->ops = ops;
+        packed->max_entries = max_packed;
+        INIT_LIST_HEAD(&packed->list);
+        INIT_LIST_HEAD(&packed->prq.list);
+
+        hsq->packed = packed;
+        mmc->max_packed_reqs = max_packed;
+    } else {
+        hsq->num_slots = HSQ_NUM_SLOTS;
+        hsq->next_tag = HSQ_INVALID_TAG;
 
-    hsq->slot = devm_kcalloc(mmc_dev(mmc), hsq->num_slots,
-                             sizeof(struct hsq_slot), GFP_KERNEL);
-    if (!hsq->slot)
-        return -ENOMEM;
+        hsq->slot = devm_kcalloc(mmc_dev(mmc), hsq->num_slots,
+                                 sizeof(struct hsq_slot),
+                                 GFP_KERNEL);
+        if (!hsq->slot)
+            return -ENOMEM;
+    }
 
     hsq->mmc = mmc;
     hsq->mmc->cqe_private = hsq;
diff --git a/drivers/mmc/host/mmc_hsq.h b/drivers/mmc/host/mmc_hsq.h
index 18b9cf5..6fcc0dc 100644
--- a/drivers/mmc/host/mmc_hsq.h
+++ b/drivers/mmc/host/mmc_hsq.h
@@ -2,6 +2,23 @@
 #ifndef LINUX_MMC_HSQ_H
 #define LINUX_MMC_HSQ_H
 
+struct hsq_packed_ops {
+    void (*packed_algo)(struct mmc_host *mmc);
+    int (*prepare_hardware)(struct mmc_host *mmc);
+    int (*unprepare_hardware)(struct mmc_host *mmc);
+    int (*packed_request)(struct mmc_host *mmc,
+                          struct mmc_packed_request *prq);
+};
+
+struct hsq_packed {
+    bool busy;
+    int max_entries;
+
+    struct list_head list;
+    struct mmc_packed_request prq;
+    const struct hsq_packed_ops *ops;
+};
+
 struct hsq_slot {
     struct mmc_request *mrq;
 };
@@ -20,11 +37,17 @@ struct mmc_hsq {
     bool enabled;
     bool waiting_for_idle;
     bool recovery_halt;
+
+    struct hsq_packed *packed;
 };
 
-int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc);
+int mmc_hsq_init(struct mmc_hsq *hsq, struct mmc_host *mmc,
+                 const struct hsq_packed_ops *ops, int max_packed);
 void mmc_hsq_suspend(struct mmc_host *mmc);
 int mmc_hsq_resume(struct mmc_host *mmc);
 bool mmc_hsq_finalize_request(struct mmc_host *mmc, struct mmc_request *mrq);
+void mmc_hsq_finalize_packed_request(struct mmc_host *mmc,
+                                     struct mmc_packed_request *prq);
+void mmc_hsq_packed_algo_rw(struct mmc_host *mmc);
 
 #endif
diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
index ae9acb8..49afe1c 100644
--- a/drivers/mmc/host/sdhci-sprd.c
+++ b/drivers/mmc/host/sdhci-sprd.c
@@ -676,7 +676,7 @@ static int sdhci_sprd_probe(struct platform_device *pdev)
         goto err_cleanup_host;
     }
 
-    ret = mmc_hsq_init(hsq, host->mmc);
+    ret = mmc_hsq_init(hsq, host->mmc, NULL, 0);
     if (ret)
         goto err_cleanup_host;
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index b7ba881..b1be983 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -165,6 +165,12 @@ struct mmc_request {
     bool            cap_cmd_during_tfr;
 
     int             tag;
+    struct list_head    list;
+};
+
+struct mmc_packed_request {
+    struct list_head    list;
+    u32                 nr_reqs;
 };
 
 struct mmc_card;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index db5e59c..04b45d0 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -216,6 +216,14 @@ struct mmc_cqe_ops {
      * will have zero data bytes transferred.
      */
     void    (*cqe_recovery_finish)(struct mmc_host *host);
+
+    /* Check whether the CQE is busy. */
+    bool    (*cqe_is_busy)(struct mmc_host *host);
+    /*
+     * Serve the purpose of kicking the hardware to handle pending
+     * requests.
+     */
+    void    (*cqe_commit_rqs)(struct mmc_host *host);
 };
 
 struct mmc_async_req {
@@ -385,6 +393,7 @@ struct mmc_host {
     unsigned int        max_blk_size;    /* maximum size of one mmc block */
     unsigned int        max_blk_count;   /* maximum number of blocks in one req */
     unsigned int        max_busy_timeout; /* max busy timeout in ms */
+    unsigned int        max_packed_reqs; /* max number of requests that can be packed */
 
     /* private data */
     spinlock_t          lock;            /* lock for claim and bus ops */
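For host drivers that want to opt in to packed mode, a sketch of the
wiring follows. The my_* callbacks are hypothetical placeholders for
controller-specific code, and 16 is only an example limit; the one
piece taken directly from this patch is the generic
mmc_hsq_packed_algo_rw() helper and the mmc_hsq_init() signature.

    #include "mmc_hsq.h"

    /* Hypothetical controller-specific callbacks, declared for illustration. */
    static int my_prepare_hardware(struct mmc_host *mmc);
    static int my_unprepare_hardware(struct mmc_host *mmc);
    static int my_packed_request(struct mmc_host *mmc,
                                 struct mmc_packed_request *prq);

    static const struct hsq_packed_ops my_packed_ops = {
        .packed_algo        = mmc_hsq_packed_algo_rw, /* helper from this patch */
        .prepare_hardware   = my_prepare_hardware,
        .unprepare_hardware = my_unprepare_hardware,
        .packed_request     = my_packed_request,
    };

    static int my_host_enable_packed(struct mmc_host *mmc, struct mmc_hsq *hsq)
    {
        /* Passing ops and max_packed > 1 switches the HSQ into packed mode. */
        return mmc_hsq_init(hsq, mmc, &my_packed_ops, 16);
    }

A host that passes NULL ops and 0 (as sdhci-sprd does above) keeps the
existing single-request software-queue behaviour.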
kbbkDAe2oPydcRovbTYMmkfnSzegdSky90vp1gHDWldw2Zzpv4a88CwSX3wyqvLNr4Kn S6aA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:in-reply-to:references; bh=o1DIsdYS7ff4jJb5jCTI/tsP5spVoqhTKmmU077/da4=; b=S9CxhPDfJNNo4Gnx01mc4YG3jcvEfLb632kbT40UHWjNRIpZxgWK+MZn4YWvb4IKOa O7kmqyWRwxuPwRwZGdItHSjZCb/7kfZ9PtIKq1K/aZJnpn6C8Iq7tXowWBFuyH/pQRTK EC0IpYYFzUyzmgE6afBJUdqF5RKADr6G7yPXBuvQ9wCzs13KIbNtavIX5WLAHj2j1hAG fOAVotw9JXyol/m/l4t9vIzVeTVKg9LRDs9u/da8X9NxoG3EvBwS5W7RcBR6WeI/+E/w f7cBw+Ej8FnUmAB758DC6pLdCy8/QKbJfkwed/l3Ay9doWGxmYY8gDfEnAMao285/b74 BAww== X-Gm-Message-State: ANhLgQ3q8aI59D+4aNAOJ8X/CnOqz1cDHcpvTm/XESf8EWFM9rcvk4Fd ioAJKZHUaBAmCwuyJWbkuno= X-Google-Smtp-Source: ADFU+vvAIAXggVnK9BXVLESVBN4pJu4tsfGabg1WukYRfyJanyZ6iLhHpKnnELZbDfK8Q4We8pMlig== X-Received: by 2002:a17:90a:218e:: with SMTP id q14mr10943428pjc.37.1584352913547; Mon, 16 Mar 2020 03:01:53 -0700 (PDT) Received: from sh03840pcu.spreadtrum.com ([117.18.48.82]) by smtp.gmail.com with ESMTPSA id 136sm63970411pgh.26.2020.03.16.03.01.50 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 16 Mar 2020 03:01:53 -0700 (PDT) From: Baolin Wang To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org, adrian.hunter@intel.com Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com, linux-mmc@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [RESEND RFC PATCH 4/8] mmc: host: sdhci: Introduce ADMA3 transfer mode Date: Mon, 16 Mar 2020 18:01:21 +0800 Message-Id: X-Mailer: git-send-email 1.9.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-mmc-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org The standard SD host controller can support ADMA3 transfer mode optionally. The ADMA3 uses command descriptor to issue an SD command, and a multi-block data transfer is programmed by using a pair of command descriptor and ADMA2 descriptor. ADMA3 performs multiple of multi-block data transfer by using integrated descriptor. This is a preparation patch to add ADMA3 structures and help to expand the ADMA buffer for support ADMA3 transfer mode. Signed-off-by: Baolin Wang --- drivers/mmc/host/sdhci.c | 106 +++++++++++++++++++++++++++++++++++++++-------- drivers/mmc/host/sdhci.h | 48 +++++++++++++++++++++ 2 files changed, 137 insertions(+), 17 deletions(-) diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 4febbcb..6238b5c 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -772,7 +772,8 @@ static void sdhci_adma_table_pre(struct sdhci_host *host, * If this triggers then we have a calculation bug * somewhere. 
:/ */ - WARN_ON((desc - host->adma_table) >= host->adma_table_sz); + WARN_ON((desc - host->adma_table) >= + host->adma_table_sz * host->adma3_table_cnt); } if (host->quirks & SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC) { @@ -4012,10 +4013,17 @@ int sdhci_setup_host(struct sdhci_host *host) (host->caps & SDHCI_CAN_DO_ADMA2)) host->flags |= SDHCI_USE_ADMA; + if ((host->quirks2 & SDHCI_QUIRK2_USE_ADMA3_SUPPORT) && + (host->flags & SDHCI_USE_ADMA) && + (host->caps1 & SDHCI_CAN_DO_ADMA3)) { + DBG("Enable ADMA3 mode for data transfer\n"); + host->flags |= SDHCI_USE_ADMA3; + } + if ((host->quirks & SDHCI_QUIRK_BROKEN_ADMA) && (host->flags & SDHCI_USE_ADMA)) { DBG("Disabling ADMA as it is marked broken\n"); - host->flags &= ~SDHCI_USE_ADMA; + host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3); } if (sdhci_can_64bit_dma(host)) @@ -4048,7 +4056,7 @@ int sdhci_setup_host(struct sdhci_host *host) if (ret) { pr_warn("%s: No suitable DMA available - falling back to PIO\n", mmc_hostname(mmc)); - host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA); + host->flags &= ~(SDHCI_USE_SDMA | SDHCI_USE_ADMA | SDHCI_USE_ADMA3); ret = 0; } @@ -4070,31 +4078,68 @@ int sdhci_setup_host(struct sdhci_host *host) host->desc_sz = host->alloc_desc_sz; host->adma_table_sz = host->adma_table_cnt * host->desc_sz; + host->adma3_table_cnt = 1; + + if (host->flags & SDHCI_USE_ADMA3) { + /* We can package maximum 16 requests once */ + host->adma3_table_cnt = SDHCI_MAX_ADMA3_ENTRIES; + + if (host->flags & SDHCI_USE_64_BIT_DMA) + host->integr_desc_sz = SDHCI_INTEGR_64_DESC_SZ; + else + host->integr_desc_sz = SDHCI_INTEGR_32_DESC_SZ; + + host->cmd_desc_sz = SDHCI_ADMA3_CMD_DESC_SZ; + host->cmd_table_sz = host->adma3_table_cnt * + SDHCI_ADMA3_CMD_DESC_SZ * SDHCI_ADMA3_CMD_DESC_ENTRIES; + + buf = dma_alloc_coherent(mmc_dev(mmc), + host->adma3_table_cnt * + host->integr_desc_sz, + &dma, GFP_KERNEL); + if (!buf) { + pr_warn("%s: Unable to allocate ADMA3 integrated buffers - falling back to ADMA\n", + mmc_hostname(mmc)); + host->flags &= ~SDHCI_USE_ADMA3; + host->adma3_table_cnt = 1; + } else { + host->integr_table = buf; + host->integr_addr = dma; + } + } + host->align_buffer_sz = SDHCI_MAX_SEGS * SDHCI_ADMA2_ALIGN; /* * Use zalloc to zero the reserved high 32-bits of 128-bit * descriptors so that they never need to be written. 
*/ buf = dma_alloc_coherent(mmc_dev(mmc), - host->align_buffer_sz + host->adma_table_sz, + host->align_buffer_sz * + host->adma3_table_cnt + + host->cmd_table_sz + + host->adma_table_sz * + host->adma3_table_cnt, &dma, GFP_KERNEL); if (!buf) { pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n", mmc_hostname(mmc)); - host->flags &= ~SDHCI_USE_ADMA; - } else if ((dma + host->align_buffer_sz) & + host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3); + } else if ((dma + host->align_buffer_sz * host->adma3_table_cnt) & (SDHCI_ADMA2_DESC_ALIGN - 1)) { pr_warn("%s: unable to allocate aligned ADMA descriptor\n", mmc_hostname(mmc)); - host->flags &= ~SDHCI_USE_ADMA; - dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + - host->adma_table_sz, buf, dma); + host->flags &= ~(SDHCI_USE_ADMA | SDHCI_USE_ADMA3); + dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz * + host->adma3_table_cnt + + host->cmd_table_sz + + host->adma_table_sz * + host->adma3_table_cnt, buf, dma); } else { host->align_buffer = buf; host->align_addr = dma; - host->adma_table = buf + host->align_buffer_sz; - host->adma_addr = dma + host->align_buffer_sz; + host->adma_table = buf + host->align_buffer_sz * host->adma3_table_cnt; + host->adma_addr = dma + host->align_buffer_sz * host->adma3_table_cnt; } } @@ -4495,12 +4540,21 @@ int sdhci_setup_host(struct sdhci_host *host) regulator_disable(mmc->supply.vqmmc); undma: if (host->align_buffer) - dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + - host->adma_table_sz, host->align_buffer, + dma_free_coherent(mmc_dev(mmc), + host->align_buffer_sz * host->adma3_table_cnt + + host->cmd_table_sz + + host->adma_table_sz * host->adma3_table_cnt, + host->align_buffer, host->align_addr); host->adma_table = NULL; host->align_buffer = NULL; + if (host->integr_table) + dma_free_coherent(mmc_dev(mmc), + host->adma3_table_cnt * host->integr_desc_sz, + host->integr_table, host->integr_addr); + host->integr_table = NULL; + return ret; } EXPORT_SYMBOL_GPL(sdhci_setup_host); @@ -4513,8 +4567,11 @@ void sdhci_cleanup_host(struct sdhci_host *host) regulator_disable(mmc->supply.vqmmc); if (host->align_buffer) - dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + - host->adma_table_sz, host->align_buffer, + dma_free_coherent(mmc_dev(mmc), + host->align_buffer_sz * host->adma3_table_cnt + + host->cmd_table_sz + + host->adma_table_sz * host->adma3_table_cnt, + host->align_buffer, host->align_addr); if (host->use_external_dma) @@ -4522,6 +4579,12 @@ void sdhci_cleanup_host(struct sdhci_host *host) host->adma_table = NULL; host->align_buffer = NULL; + + if (host->integr_table) + dma_free_coherent(mmc_dev(mmc), + host->adma3_table_cnt * host->integr_desc_sz, + host->integr_table, host->integr_addr); + host->integr_table = NULL; } EXPORT_SYMBOL_GPL(sdhci_cleanup_host); @@ -4650,8 +4713,11 @@ void sdhci_remove_host(struct sdhci_host *host, int dead) regulator_disable(mmc->supply.vqmmc); if (host->align_buffer) - dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + - host->adma_table_sz, host->align_buffer, + dma_free_coherent(mmc_dev(mmc), + host->align_buffer_sz * host->adma3_table_cnt + + host->cmd_table_sz + + host->adma_table_sz * host->adma3_table_cnt, + host->align_buffer, host->align_addr); if (host->use_external_dma) @@ -4659,6 +4725,12 @@ void sdhci_remove_host(struct sdhci_host *host, int dead) host->adma_table = NULL; host->align_buffer = NULL; + + if (host->integr_table) + dma_free_coherent(mmc_dev(mmc), + host->adma3_table_cnt * host->integr_desc_sz, + 
host->integr_table, host->integr_addr); + host->integr_table = NULL; } EXPORT_SYMBOL_GPL(sdhci_remove_host); diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index 5507a73..96aed99 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -274,6 +274,9 @@ #define SDHCI_PRESET_SDCLK_FREQ_MASK 0x3FF #define SDHCI_PRESET_SDCLK_FREQ_SHIFT 0 +#define SDHCI_ADMA3_ADDRESS 0x78 +#define SDHCI_ADMA3_ADDRESS_HI 0x7c + #define SDHCI_SLOT_INT_STATUS 0xFC #define SDHCI_HOST_VERSION 0xFE @@ -346,6 +349,41 @@ struct sdhci_adma2_64_desc { #define ADMA2_NOP_END_VALID 0x3 #define ADMA2_END 0x2 +#define SDHCI_MAX_ADMA3_ENTRIES 16 + +/* ADMA3 command descriptor */ +struct sdhci_adma3_cmd_desc { + __le32 cmd; + __le32 reg; +} __packed __aligned(4); + +#define ADMA3_TRAN_VALID 0x9 +#define ADMA3_TRAN_END 0xb + +/* ADMA3 command descriptor size */ +#define SDHCI_ADMA3_CMD_DESC_ENTRIES 4 +#define SDHCI_ADMA3_CMD_DESC_SZ 8 + +/* ADMA3 integrated 32-bit descriptor */ +struct sdhci_integr_32_desc { + __le32 cmd; + __le32 addr; +} __packed __aligned(4); + +#define SDHCI_INTEGR_32_DESC_SZ 8 + +/* ADMA3 integrated 64-bit descriptor */ +struct sdhci_integr_64_desc { + __le32 cmd; + __le32 addr_lo; + __le32 addr_hi; +} __packed __aligned(4); + +#define SDHCI_INTEGR_64_DESC_SZ 16 + +#define ADMA3_INTEGR_TRAN_VALID 0x39 +#define ADMA3_INTEGR_TRAN_END 0x3b + /* * Maximum segments assuming a 512KiB maximum requisition size and a minimum * 4KiB page size. @@ -484,6 +522,8 @@ struct sdhci_host { * block count. */ #define SDHCI_QUIRK2_USE_32BIT_BLK_CNT (1<<18) +/* use ADMA3 for data read/write if the hardware supports it */ +#define SDHCI_QUIRK2_USE_ADMA3_SUPPORT (1<<19) int irq; /* Device IRQ */ void __iomem *ioaddr; /* Mapped address */ @@ -520,6 +560,7 @@ struct sdhci_host { #define SDHCI_SIGNALING_330 (1<<14) /* Host is capable of 3.3V signaling */ #define SDHCI_SIGNALING_180 (1<<15) /* Host is capable of 1.8V signaling */ #define SDHCI_SIGNALING_120 (1<<16) /* Host is capable of 1.2V signaling */ +#define SDHCI_USE_ADMA3 (1<<17) /* Host is ADMA3 capable */ unsigned int version; /* SDHCI spec. version */ @@ -552,15 +593,20 @@ struct sdhci_host { void *adma_table; /* ADMA descriptor table */ void *align_buffer; /* Bounce buffer */ + void *integr_table; /* ADMA3 integrated descriptor table */ size_t adma_table_sz; /* ADMA descriptor table size */ size_t align_buffer_sz; /* Bounce buffer size */ + size_t cmd_table_sz; /* ADMA3 command descriptor table size */ dma_addr_t adma_addr; /* Mapped ADMA descr. table */ dma_addr_t align_addr; /* Mapped bounce buffer */ + dma_addr_t integr_addr; /* Mapped ADMA3 integrated descr. table */ unsigned int desc_sz; /* ADMA current descriptor size */ unsigned int alloc_desc_sz; /* ADMA descr.
max size host supports */ + unsigned int cmd_desc_sz; /* ADMA3 command descriptor size */ + unsigned int integr_desc_sz; /* ADMA3 integrated descriptor size */ struct workqueue_struct *complete_wq; /* Request completion wq */ struct work_struct complete_work; /* Request completion work */ @@ -611,6 +657,8 @@ struct sdhci_host { /* Host ADMA table count */ u32 adma_table_cnt; + /* Host ADMA3 table count */ + u32 adma3_table_cnt; u64 data_timeout;
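To make the allocation above concrete, here is a minimal user-space sketch (not kernel code) of the layout this patch implies: one bounce-buffer area per packed request, followed by a descriptor region in which each request's ADMA3 command descriptors and ADMA2 table sit back to back. SDHCI_MAX_SEGS = 128, the adma_table_cnt formula, and the 16-byte 64-bit ADMA2 descriptor are assumptions taken from the existing driver, not from this patch; the ADMA3 constants are the ones added by this series.

#include <stdio.h>
#include <stddef.h>

#define SDHCI_MAX_SEGS			128
#define SDHCI_ADMA2_ALIGN		4
#define SDHCI_MAX_ADMA3_ENTRIES		16
#define SDHCI_ADMA3_CMD_DESC_SZ		8
#define SDHCI_ADMA3_CMD_DESC_ENTRIES	4
#define SDHCI_INTEGR_64_DESC_SZ		16

int main(void)
{
	size_t desc_sz = 16;			/* assumed 64-bit ADMA2 descriptor size */
	size_t adma_table_cnt = SDHCI_MAX_SEGS * 2 + 1;	/* assumed, as in the existing driver */
	size_t adma_table_sz = adma_table_cnt * desc_sz;
	size_t align_buffer_sz = SDHCI_MAX_SEGS * SDHCI_ADMA2_ALIGN;
	size_t cmd_table_sz = SDHCI_MAX_ADMA3_ENTRIES *
			SDHCI_ADMA3_CMD_DESC_SZ * SDHCI_ADMA3_CMD_DESC_ENTRIES;

	/* bounce buffers for all packed requests first, then the descriptors */
	size_t desc_region_off = align_buffer_sz * SDHCI_MAX_ADMA3_ENTRIES;
	size_t total = desc_region_off + cmd_table_sz +
			adma_table_sz * SDHCI_MAX_ADMA3_ENTRIES;

	/* the ADMA3 integrated descriptor table is a separate allocation */
	size_t integr_sz = SDHCI_MAX_ADMA3_ENTRIES * SDHCI_INTEGR_64_DESC_SZ;

	printf("descriptor region starts at %zu, main buffer %zu bytes\n",
	       desc_region_off, total);
	printf("integrated descriptor table: %zu bytes\n", integr_sz);
	return 0;
}

This also shows why host->adma_table now points at buf + align_buffer_sz * adma3_table_cnt: the descriptor region begins right after the last bounce buffer.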
From patchwork Mon Mar 16 10:01:22 2020
From: Baolin Wang
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org, adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com, linux-mmc@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 5/8] mmc: host: sdhci: Factor out the command configuration
Date: Mon, 16 Mar 2020 18:01:22 +0800

Move the SD command configuration into a separate function to simplify sdhci_send_command(). Moreover, this function can be reused to support the ADMA3 transfer mode in the following patches. Signed-off-by: Baolin Wang --- drivers/mmc/host/sdhci.c | 65 ++++++++++++++++++++++++++++-------------------- 1 file changed, 38 insertions(+), 27 deletions(-) diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 6238b5c..4de0f48 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -1546,9 +1546,43 @@ static void sdhci_finish_data(struct sdhci_host *host) } } -void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd) +static int sdhci_get_command(struct sdhci_host *host, struct mmc_command *cmd) { int flags; + + if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) { + pr_err("%s: Unsupported response type!\n", + mmc_hostname(host->mmc)); + cmd->error = -EINVAL; + sdhci_finish_mrq(host, cmd->mrq); + return -EINVAL; + } + + if (!(cmd->flags & MMC_RSP_PRESENT)) + flags = SDHCI_CMD_RESP_NONE; + else if (cmd->flags & MMC_RSP_136) + flags = SDHCI_CMD_RESP_LONG; + else if (cmd->flags & MMC_RSP_BUSY) + flags = SDHCI_CMD_RESP_SHORT_BUSY; + else + flags = SDHCI_CMD_RESP_SHORT; + + if (cmd->flags & MMC_RSP_CRC) + flags |= SDHCI_CMD_CRC; + if (cmd->flags & MMC_RSP_OPCODE) + flags |= SDHCI_CMD_INDEX; + + /* CMD19 is special in that the Data Present Select should be set */ + if (cmd->data || cmd->opcode == MMC_SEND_TUNING_BLOCK || + cmd->opcode == MMC_SEND_TUNING_BLOCK_HS200) + flags |= SDHCI_CMD_DATA; + + return SDHCI_MAKE_CMD(cmd->opcode, flags); +} + +void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd) +{ + int command; u32 mask; unsigned long timeout; @@ -1605,32 +1639,9 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd) sdhci_set_transfer_mode(host, cmd); - if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) { - pr_err("%s: Unsupported response type!\n", - mmc_hostname(host->mmc)); - cmd->error = -EINVAL; - sdhci_finish_mrq(host, cmd->mrq); + command = sdhci_get_command(host, cmd); + if (command < 0) return; - } - - if (!(cmd->flags & MMC_RSP_PRESENT)) - flags = SDHCI_CMD_RESP_NONE; - else if (cmd->flags & MMC_RSP_136) - flags = SDHCI_CMD_RESP_LONG; - else if (cmd->flags & MMC_RSP_BUSY) - flags = SDHCI_CMD_RESP_SHORT_BUSY; - else - flags = SDHCI_CMD_RESP_SHORT; - - if (cmd->flags & MMC_RSP_CRC) - flags |= SDHCI_CMD_CRC; - if (cmd->flags & MMC_RSP_OPCODE) - flags |= SDHCI_CMD_INDEX; - - /* CMD19 is special in that the Data Present Select should be set */ - if (cmd->data || cmd->opcode == MMC_SEND_TUNING_BLOCK || - cmd->opcode == MMC_SEND_TUNING_BLOCK_HS200) - flags |= SDHCI_CMD_DATA; timeout = jiffies; if (host->data_timeout) @@ -1644,7 +1655,7 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd) if (host->use_external_dma) sdhci_external_dma_pre_transfer(host, cmd); - sdhci_writew(host, SDHCI_MAKE_CMD(cmd->opcode, flags), SDHCI_COMMAND); + sdhci_writew(host, command, SDHCI_COMMAND); } EXPORT_SYMBOL_GPL(sdhci_send_command);
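For reference, the value sdhci_get_command() returns is simply the SDHCI command register image built by SDHCI_MAKE_CMD(). A small stand-alone sketch of what it would compute for a multi-block read (CMD18 with an R1 response), using the command register field values from the existing sdhci.h:

#include <stdio.h>

#define SDHCI_CMD_RESP_SHORT	0x02
#define SDHCI_CMD_CRC		0x08
#define SDHCI_CMD_INDEX		0x10
#define SDHCI_CMD_DATA		0x20
#define SDHCI_MAKE_CMD(c, f)	((((c) & 0xff) << 8) | ((f) & 0xff))

int main(void)
{
	int opcode = 18;	/* MMC_READ_MULTIPLE_BLOCK */
	int flags = SDHCI_CMD_RESP_SHORT | SDHCI_CMD_CRC |
		    SDHCI_CMD_INDEX | SDHCI_CMD_DATA;

	/* prints 0x123a: the 16-bit image written to SDHCI_COMMAND */
	printf("SDHCI_COMMAND = 0x%04x\n", SDHCI_MAKE_CMD(opcode, flags));
	return 0;
}

Returning this 16-bit value (rather than writing it immediately) is what lets the ADMA3 patch later in the series fold the same helper output into a command descriptor instead of a register write.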
From patchwork Mon Mar 16 10:01:23 2020
From: Baolin Wang
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org, adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com, linux-mmc@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 6/8] mmc: host: sdhci: Remove redundant sg_count member of struct sdhci_host
Date: Mon, 16 Mar 2020 18:01:23 +0800
Message-Id: <537851f05fb3411859ff0196433635dbaf8b59aa.1584350380.git.baolin.wang7@gmail.com>
The mmc_data structure already has a member to save the mapped sg count, so there is no need to introduce a redundant sg_count member in struct sdhci_host; remove it. This is also a preparation patch to support the ADMA3 transfer mode. Signed-off-by: Baolin Wang --- drivers/mmc/host/sdhci.c | 12 +++++------- drivers/mmc/host/sdhci.h | 2 -- 2 files changed, 5 insertions(+), 9 deletions(-) diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 4de0f48..6d6f450 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -708,7 +708,7 @@ static void sdhci_adma_mark_end(void *desc) } static void sdhci_adma_table_pre(struct sdhci_host *host, - struct mmc_data *data, int sg_count) + struct mmc_data *data) { struct scatterlist *sg; unsigned long flags; @@ -722,14 +722,12 @@ static void sdhci_adma_table_pre(struct sdhci_host *host, * We currently guess that it is LE. */ - host->sg_count = sg_count; - desc = host->adma_table; align = host->align_buffer; align_addr = host->align_addr; - for_each_sg(data->sg, sg, host->sg_count, i) { + for_each_sg(data->sg, sg, data->sg_count, i) { addr = sg_dma_address(sg); len = sg_dma_len(sg); @@ -801,7 +799,7 @@ static void sdhci_adma_table_post(struct sdhci_host *host, bool has_unaligned = false; /* Do a quick scan of the SG list for any unaligned mappings */ - for_each_sg(data->sg, sg, host->sg_count, i) + for_each_sg(data->sg, sg, data->sg_count, i) if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) { has_unaligned = true; break; @@ -813,7 +811,7 @@ static void sdhci_adma_table_post(struct sdhci_host *host, align = host->align_buffer; - for_each_sg(data->sg, sg, host->sg_count, i) { + for_each_sg(data->sg, sg, data->sg_count, i) { if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) { size = SDHCI_ADMA2_ALIGN - (sg_dma_address(sg) & SDHCI_ADMA2_MASK); @@ -1133,7 +1131,7 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd) WARN_ON(1); host->flags &= ~SDHCI_REQ_USE_DMA; } else if (host->flags & SDHCI_USE_ADMA) { - sdhci_adma_table_pre(host, data, sg_cnt); + sdhci_adma_table_pre(host, data); sdhci_set_adma_addr(host, host->adma_addr); } else { WARN_ON(sg_cnt != 1); sdhci_set_sdma_addr(host, sdhci_sdma_address(host)); diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index 96aed99..f33830b 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -589,8 +589,6 @@ struct sdhci_host { struct sg_mapping_iter sg_miter; /* SG state for PIO */ unsigned int blocks; /* remaining PIO blocks */ - int sg_count; /* Mapped sg entries */ - void *adma_table; /* ADMA descriptor table */ void *align_buffer; /* Bounce buffer */ void *integr_table; /* ADMA3 integrated descriptor table */
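A hedged sketch of the pattern this change relies on (example_map_and_dump() is a hypothetical helper, not part of the patch): the mapped entry count returned by dma_map_sg() is already recorded in struct mmc_data's sg_count when the request is mapped for DMA, so iterating the mapped scatterlist needs no host-side copy.

static int example_map_and_dump(struct sdhci_host *host, struct mmc_data *data)
{
	struct scatterlist *sg;
	int i, sg_cnt;

	sg_cnt = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
			    mmc_get_dma_dir(data));
	if (sg_cnt <= 0)
		return -ENOSPC;

	/* the core structure keeps the mapped count; no copy in sdhci_host */
	data->sg_count = sg_cnt;

	for_each_sg(data->sg, sg, data->sg_count, i)
		pr_debug("%s: seg %d: addr %pad, len %u\n",
			 mmc_hostname(host->mmc), i,
			 &sg_dma_address(sg), sg_dma_len(sg));

	return sg_cnt;
}

Dropping the duplicate also matters for the next patch: with several packed requests in flight, a single per-host sg_count could not describe them all, whereas each request's mmc_data can.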
From patchwork Mon Mar 16 10:01:24 2020
From: Baolin Wang
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org, adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com, linux-mmc@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 7/8] mmc: host: sdhci: Add MMC packed request support
Date: Mon, 16 Mar 2020 18:01:24 +0800
Message-Id: <8eec1acda41e51706baab3b3e470539517c1fa7c.1584350380.git.baolin.wang7@gmail.com>

This patch adds MMC packed operations to support packed requests, and enables the ADMA3 transfer mode to support this feature. The ADMA3 transfer mode is enabled only for read and write commands; in that mode the command complete interrupt and the data timeout interrupt are disabled, and a software data timeout is used instead. Other non-data commands still use the ADMA2 transfer mode, since there is no benefit in using ADMA3 for them.
Signed-off-by: Baolin Wang --- drivers/mmc/host/sdhci.c | 316 +++++++++++++++++++++++++++++++++++++++++++++-- drivers/mmc/host/sdhci.h | 11 ++ 2 files changed, 315 insertions(+), 12 deletions(-) diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 6d6f450..5f1f157 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -110,6 +110,19 @@ void sdhci_dumpregs(struct sdhci_host *host) } } + if (host->adma3_enabled) { + if (host->flags & SDHCI_USE_64_BIT_DMA) { + SDHCI_DUMP("ADMA3 Err: 0x%08x | ADMA3 Ptr: 0x%08x%08x\n", + sdhci_readl(host, SDHCI_ADMA_ERROR), + sdhci_readl(host, SDHCI_ADMA3_ADDRESS_HI), + sdhci_readl(host, SDHCI_ADMA3_ADDRESS)); + } else { + SDHCI_DUMP("ADMA3 Err: 0x%08x | ADMA3 Ptr: 0x%08x\n", + sdhci_readl(host, SDHCI_ADMA_ERROR), + sdhci_readl(host, SDHCI_ADMA3_ADDRESS)); + } + } + SDHCI_DUMP("============================================\n"); } EXPORT_SYMBOL_GPL(sdhci_dumpregs); @@ -287,7 +300,9 @@ static void sdhci_config_dma(struct sdhci_host *host) goto out; /* Note if DMA Select is zero then SDMA is selected */ - if (host->flags & SDHCI_USE_ADMA) + if (host->adma3_enabled) + ctrl |= SDHCI_CTRL_ADMA3; + else if (host->flags & SDHCI_USE_ADMA) ctrl |= SDHCI_CTRL_ADMA32; if (host->flags & SDHCI_USE_64_BIT_DMA) { @@ -457,7 +472,7 @@ static inline void sdhci_led_deactivate(struct sdhci_host *host) static void sdhci_mod_timer(struct sdhci_host *host, struct mmc_request *mrq, unsigned long timeout) { - if (sdhci_data_line_cmd(mrq->cmd)) + if (host->prq || sdhci_data_line_cmd(mrq->cmd)) mod_timer(&host->data_timer, timeout); else mod_timer(&host->timer, timeout); @@ -465,7 +480,7 @@ static void sdhci_mod_timer(struct sdhci_host *host, struct mmc_request *mrq, static void sdhci_del_timer(struct sdhci_host *host, struct mmc_request *mrq) { - if (sdhci_data_line_cmd(mrq->cmd)) + if (host->prq || sdhci_data_line_cmd(mrq->cmd)) del_timer(&host->data_timer); else del_timer(&host->timer); @@ -722,10 +737,16 @@ static void sdhci_adma_table_pre(struct sdhci_host *host, * We currently guess that it is LE. 
*/ - desc = host->adma_table; - align = host->align_buffer; - - align_addr = host->align_addr; + if (host->adma3_enabled) { + desc = host->adma3_pos; + align = host->adma3_align_pos; + align_addr = host->align_addr + + host->adma3_align_pos - host->align_buffer; + } else { + desc = host->adma_table; + align = host->align_buffer; + align_addr = host->align_addr; + } for_each_sg(data->sg, sg, data->sg_count, i) { addr = sg_dma_address(sg); @@ -784,6 +805,11 @@ static void sdhci_adma_table_pre(struct sdhci_host *host, /* Add a terminating entry - nop, end, valid */ __sdhci_adma_write_desc(host, &desc, 0, 0, ADMA2_NOP_END_VALID); } + + if (host->adma3_enabled) { + host->adma3_pos = desc; + host->adma3_align_pos = align; + } } static void sdhci_adma_table_post(struct sdhci_host *host, @@ -809,7 +835,10 @@ static void sdhci_adma_table_post(struct sdhci_host *host, dma_sync_sg_for_cpu(mmc_dev(host->mmc), data->sg, data->sg_len, DMA_FROM_DEVICE); - align = host->align_buffer; + if (host->adma3_enabled) + align = host->adma3_align_pos; + else + align = host->align_buffer; for_each_sg(data->sg, sg, data->sg_count, i) { if (sg_dma_address(sg) & SDHCI_ADMA2_MASK) { @@ -823,6 +852,9 @@ static void sdhci_adma_table_post(struct sdhci_host *host, align += SDHCI_ADMA2_ALIGN; } } + + if (host->adma3_enabled) + host->adma3_align_pos = align; } } } @@ -1031,7 +1063,7 @@ static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) static void sdhci_initialize_data(struct sdhci_host *host, struct mmc_data *data) { - WARN_ON(host->data); + WARN_ON(!host->prq && host->data); /* Sanity checks */ BUG_ON(data->blksz * data->blocks > 524288); @@ -1132,7 +1164,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd) host->flags &= ~SDHCI_REQ_USE_DMA; } else if (host->flags & SDHCI_USE_ADMA) { sdhci_adma_table_pre(host, data); - sdhci_set_adma_addr(host, host->adma_addr); + if (!host->adma3_enabled) + sdhci_set_adma_addr(host, host->adma_addr); } else { WARN_ON(sg_cnt != 1); sdhci_set_sdma_addr(host, sdhci_sdma_address(host)); @@ -1155,6 +1188,9 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd) sdhci_set_transfer_irqs(host); + if (host->adma3_enabled) + return; + sdhci_set_block_info(host, data); } @@ -1485,6 +1521,36 @@ static void sdhci_finish_mrq(struct sdhci_host *host, struct mmc_request *mrq) queue_work(host->complete_wq, &host->complete_work); } +static void sdhci_finish_packed_data(struct sdhci_host *host, int error) +{ + struct mmc_request *mrq; + + host->data = NULL; + /* + * Reset the align buffer pointer address for unaligned mappings after + * finishing the transfer. 
+ */ + host->adma3_align_pos = host->align_buffer; + + if (error) + sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA); + + list_for_each_entry(mrq, &host->prq->list, list) { + struct mmc_data *data = mrq->data; + + sdhci_adma_table_post(host, data); + data->error = error; + + if (data->error) + data->bytes_xfered = 0; + else + data->bytes_xfered = data->blksz * data->blocks; + } + + sdhci_del_timer(host, NULL); + sdhci_led_deactivate(host); +} + static void sdhci_finish_data(struct sdhci_host *host) { struct mmc_command *data_cmd = host->data_cmd; @@ -1620,7 +1686,7 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd) host->cmd = cmd; host->data_timeout = 0; - if (sdhci_data_line_cmd(cmd)) { + if (!host->prq && sdhci_data_line_cmd(cmd)) { WARN_ON(host->data_cmd); host->data_cmd = cmd; sdhci_set_timeout(host, cmd); @@ -2026,6 +2092,206 @@ void sdhci_set_power(struct sdhci_host *host, unsigned char mode, * * \*****************************************************************************/ +static void sdhci_adma3_write_cmd_desc(struct sdhci_host *host, + struct mmc_command *cmd) +{ + struct mmc_data *data = cmd->data; + struct sdhci_adma3_cmd_desc *cmd_desc = host->adma3_pos; + int blksz, command; + u16 mode = 0; + + /* Set block count */ + cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID); + cmd_desc->reg = cpu_to_le32(data->blocks); + cmd_desc++; + + /* Set block size */ + cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID); + blksz = SDHCI_MAKE_BLKSZ(host->sdma_boundary, data->blksz); + cmd_desc->reg = cpu_to_le32(blksz); + cmd_desc++; + + /* Set argument */ + cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_VALID); + cmd_desc->reg = cpu_to_le32(cmd->arg); + cmd_desc++; + + /* set command and transfer mode */ + if (data->flags & MMC_DATA_READ) + mode |= SDHCI_TRNS_READ; + + if (!(host->quirks2 & SDHCI_QUIRK2_SUPPORT_SINGLE)) + mode |= SDHCI_TRNS_BLK_CNT_EN; + + if (mmc_op_multi(cmd->opcode) || data->blocks > 1) + mode |= SDHCI_TRNS_MULTI; + + sdhci_auto_cmd_select(host, cmd, &mode); + mode |= SDHCI_TRNS_DMA; + + command = sdhci_get_command(host, cmd); + command = (command << 16) | mode; + cmd_desc->cmd = cpu_to_le32(ADMA3_TRAN_END); + cmd_desc->reg = cpu_to_le32(command); + + host->adma3_pos += + SDHCI_ADMA3_CMD_DESC_SZ * SDHCI_ADMA3_CMD_DESC_ENTRIES; +} + +static void sdhci_adma3_write_integr_desc(struct sdhci_host *host, + dma_addr_t addr) +{ + struct sdhci_integr_64_desc *integr_desc = host->integr_table; + + integr_desc->cmd = cpu_to_le32(ADMA3_INTEGR_TRAN_END); + integr_desc->addr_lo = cpu_to_le32((u32)addr); + + if (host->flags & SDHCI_USE_64_BIT_DMA) + integr_desc->addr_hi = cpu_to_le32((u64)addr >> 32); +} + +static void sdhci_set_adma3_addr(struct sdhci_host *host, dma_addr_t addr) +{ + sdhci_writel(host, addr, SDHCI_ADMA3_ADDRESS); + if (host->flags & SDHCI_USE_64_BIT_DMA) + sdhci_writel(host, (u64)addr >> 32, SDHCI_ADMA3_ADDRESS_HI); +} + +int sdhci_prepare_packed(struct mmc_host *mmc) +{ + struct sdhci_host *host = mmc_priv(mmc); + unsigned long timeout, flags; + u32 mask; + + spin_lock_irqsave(&host->lock, flags); + + if (!(host->flags & SDHCI_USE_ADMA3) || + !(host->flags & (SDHCI_AUTO_CMD23 | SDHCI_AUTO_CMD12))) { + spin_unlock_irqrestore(&host->lock, flags); + pr_err("%s: Unsupported packed request\n", + mmc_hostname(host->mmc)); + return -EOPNOTSUPP; + } + + /* Wait max 10 ms */ + timeout = 10; + mask = SDHCI_CMD_INHIBIT | SDHCI_DATA_INHIBIT; + + while (sdhci_readl(host, SDHCI_PRESENT_STATE) & mask) { + if (timeout == 0) { + sdhci_dumpregs(host); + 
spin_unlock_irqrestore(&host->lock, flags); + + pr_err("%s: Controller never released inhibit bit(s).\n", + mmc_hostname(host->mmc)); + return -EIO; + } + + timeout--; + mdelay(1); + } + + /* Disable command complete event for ADMA3 mode */ + host->ier &= ~SDHCI_INT_RESPONSE; + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); + sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); + + /* + * Disable data timeout interrupt, and will use software timeout for + * packed request. + */ + sdhci_set_data_timeout_irq(host, false); + + /* Enable ADMA3 mode for packed request */ + host->adma3_enabled = true; + + spin_unlock_irqrestore(&host->lock, flags); + + return 0; +} + +int sdhci_unprepare_packed(struct mmc_host *mmc) +{ + struct sdhci_host *host = mmc_priv(mmc); + unsigned long flags; + + spin_lock_irqsave(&host->lock, flags); + + /* Disable ADMA3 mode after finishing packed request */ + host->adma3_enabled = false; + + /* Re-enable command complete event after ADMA3 mode */ + host->ier |= SDHCI_INT_RESPONSE; + + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); + sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); + spin_unlock_irqrestore(&host->lock, flags); + + return 0; +} + +int sdhci_packed_request(struct mmc_host *mmc, + struct mmc_packed_request *prq) +{ + struct sdhci_host *host = mmc_priv(mmc); + struct mmc_request *mrq; + unsigned long timeout, flags; + u64 data_timeout = 0; + dma_addr_t integr_addr; + int present; + + /* Firstly check card presence */ + present = mmc->ops->get_cd(mmc); + + spin_lock_irqsave(&host->lock, flags); + + sdhci_led_activate(host); + + if (!present || host->flags & SDHCI_DEVICE_DEAD) { + spin_unlock_irqrestore(&host->lock, flags); + return -ENOMEDIUM; + } + + host->prq = prq; + host->adma3_pos = host->adma_table; + host->adma3_align_pos = host->align_buffer; + integr_addr = host->adma_addr; + + list_for_each_entry(mrq, &prq->list, list) { + struct mmc_command *cmd = mrq->cmd; + + /* Set command descriptor */ + sdhci_adma3_write_cmd_desc(host, cmd); + /* Set ADMA2 descriptors */ + sdhci_prepare_data(host, cmd); + /* Set integrated descriptor */ + sdhci_adma3_write_integr_desc(host, integr_addr); + + /* Update the integrated descriptor address */ + integr_addr = + host->adma_addr + (host->adma3_pos - host->adma_table); + + /* Calculate each command's data timeout */ + sdhci_calc_sw_timeout(host, cmd); + data_timeout += host->data_timeout; + } + + timeout = jiffies; + if (data_timeout) + timeout += nsecs_to_jiffies(data_timeout); + else + timeout += 10 * HZ * prq->nr_reqs; + sdhci_mod_timer(host, NULL, timeout); + + /* Start ADMA3 transfer */ + sdhci_set_adma3_addr(host, host->integr_addr); + + spin_unlock_irqrestore(&host->lock, flags); + + return 0; +} +EXPORT_SYMBOL_GPL(sdhci_packed_request); + static void sdhci_start_request(struct mmc_host *mmc, struct mmc_request *mrq, int present) { @@ -2854,9 +3120,19 @@ static bool sdhci_request_done(struct sdhci_host *host) { unsigned long flags; struct mmc_request *mrq; + struct mmc_packed_request *prq; int i; spin_lock_irqsave(&host->lock, flags); + prq = host->prq; + + if (prq) { + host->prq = NULL; + spin_unlock_irqrestore(&host->lock, flags); + + host->ops->packed_request_done(host, prq); + return true; + } for (i = 0; i < SDHCI_MAX_MRQS; i++) { mrq = host->mrqs_done[i]; @@ -3012,6 +3288,17 @@ static void sdhci_timeout_data_timer(struct timer_list *t) spin_lock_irqsave(&host->lock, flags); + if (host->prq) { + pr_err("%s: Packed requests timeout for hardware interrupt.\n", + mmc_hostname(host->mmc)); + 
sdhci_dumpregs(host); + sdhci_finish_packed_data(host, -ETIMEDOUT); + queue_work(host->complete_wq, &host->complete_work); + spin_unlock_irqrestore(&host->lock, flags); + + return; + } + if (host->data || host->data_cmd || (host->cmd && sdhci_data_line_cmd(host->cmd))) { pr_err("%s: Timeout waiting for hardware interrupt.\n", @@ -3219,7 +3506,9 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask) host->ops->adma_workaround(host, intmask); } - if (host->data->error) + if (host->prq) + sdhci_finish_packed_data(host, host->data->error); + else if (host->data->error) sdhci_finish_data(host); else { if (intmask & (SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL)) @@ -3391,6 +3680,9 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id) host->mrqs_done[i] = NULL; } } + + if (host->prq) + result = IRQ_WAKE_THREAD; out: spin_unlock(&host->lock); diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h index f33830b..bbc937b 100644 --- a/drivers/mmc/host/sdhci.h +++ b/drivers/mmc/host/sdhci.h @@ -579,6 +579,7 @@ struct sdhci_host { bool v4_mode; /* Host Version 4 Enable */ bool use_external_dma; /* Host selects to use external DMA */ bool always_defer_done; /* Always defer to complete requests */ + bool adma3_enabled; /* ADMA3 mode enabled */ struct mmc_request *mrqs_done[SDHCI_MAX_MRQS]; /* Requests done */ struct mmc_command *cmd; /* Current command */ @@ -586,12 +587,15 @@ struct sdhci_host { struct mmc_data *data; /* Current data request */ unsigned int data_early:1; /* Data finished before cmd */ + struct mmc_packed_request *prq; /* Current packed request */ struct sg_mapping_iter sg_miter; /* SG state for PIO */ unsigned int blocks; /* remaining PIO blocks */ void *adma_table; /* ADMA descriptor table */ void *align_buffer; /* Bounce buffer */ void *integr_table; /* ADMA3 integrated descriptor table */ + void *adma3_pos; /* ADMA3 buffer position */ + void *adma3_align_pos; /* ADMA3 bounce buffer position */ size_t adma_table_sz; /* ADMA descriptor table size */ size_t align_buffer_sz; /* Bounce buffer size */ @@ -703,6 +707,8 @@ struct sdhci_ops { dma_addr_t addr, int len, unsigned int cmd); void (*request_done)(struct sdhci_host *host, struct mmc_request *mrq); + void (*packed_request_done)(struct sdhci_host *host, + struct mmc_packed_request *prq); }; #ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS @@ -857,4 +863,9 @@ bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error, void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable); void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd); +int sdhci_prepare_packed(struct mmc_host *mmc); +int sdhci_unprepare_packed(struct mmc_host *mmc); +int sdhci_packed_request(struct mmc_host *mmc, + struct mmc_packed_request *prq); + #endif /* __SDHCI_HW_H */
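Taken together, the new interfaces are meant to be driven in a prepare/issue/complete sequence. A hedged sketch of a caller follows; the function shown is hypothetical, since the real caller is the hsq packed_ops glue added in the next patch:

static int example_issue_packed(struct mmc_host *mmc,
				struct mmc_packed_request *prq)
{
	int ret;

	/* switch the controller to ADMA3 and mask per-command interrupts */
	ret = sdhci_prepare_packed(mmc);
	if (ret)
		return ret;

	/*
	 * sdhci_packed_request() writes one ADMA3 command descriptor plus
	 * one ADMA2 table per mrq on prq->list, chains them through the
	 * integrated descriptor table, and starts the transfer by writing
	 * the table address to SDHCI_ADMA3_ADDRESS.
	 */
	ret = sdhci_packed_request(mmc, prq);
	if (ret)
		goto out;

	/*
	 * One interrupt completes the whole chain: sdhci_request_done()
	 * sees host->prq and invokes ->packed_request_done() exactly once.
	 */
	return 0;
out:
	sdhci_unprepare_packed(mmc);
	return ret;
}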
From patchwork Mon Mar 16 10:01:25 2020
From: Baolin Wang
To: axboe@kernel.dk, paolo.valente@linaro.org, ulf.hansson@linaro.org, adrian.hunter@intel.com
Cc: ming.lei@redhat.com, arnd@arndb.de, linus.walleij@linaro.org, baolin.wang7@gmail.com, orsonzhai@gmail.com, zhang.lyra@gmail.com, linux-mmc@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RESEND RFC PATCH 8/8] mmc: host: sdhci-sprd: Add MMC packed request support
Date: Mon, 16 Mar 2020 18:01:25 +0800
Message-Id: <060083eadc085cc93149475f3259cf2eb9d81d83.1584350380.git.baolin.wang7@gmail.com>

Enable the ADMA3 transfer mode and add the packed operations to support MMC packed requests, which improves I/O performance.
Signed-off-by: Baolin Wang --- drivers/mmc/host/sdhci-sprd.c | 30 ++++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c index 49afe1c..daa38ed 100644 --- a/drivers/mmc/host/sdhci-sprd.c +++ b/drivers/mmc/host/sdhci-sprd.c @@ -390,6 +390,12 @@ static void sdhci_sprd_request_done(struct sdhci_host *host, mmc_request_done(host->mmc, mrq); } +static void sdhci_sprd_packed_request_done(struct sdhci_host *host, + struct mmc_packed_request *prq) +{ + mmc_hsq_finalize_packed_request(host->mmc, prq); +} + static struct sdhci_ops sdhci_sprd_ops = { .read_l = sdhci_sprd_readl, .write_l = sdhci_sprd_writel, @@ -404,6 +410,7 @@ static void sdhci_sprd_request_done(struct sdhci_host *host, .get_max_timeout_count = sdhci_sprd_get_max_timeout_count, .get_ro = sdhci_sprd_get_ro, .request_done = sdhci_sprd_request_done, + .packed_request_done = sdhci_sprd_packed_request_done, }; static void sdhci_sprd_request(struct mmc_host *mmc, struct mmc_request *mrq) @@ -546,10 +553,18 @@ static void sdhci_sprd_phy_param_parse(struct sdhci_sprd_host *sprd_host, SDHCI_QUIRK_MISSING_CAPS, .quirks2 = SDHCI_QUIRK2_BROKEN_HS200 | SDHCI_QUIRK2_USE_32BIT_BLK_CNT | - SDHCI_QUIRK2_PRESET_VALUE_BROKEN, + SDHCI_QUIRK2_PRESET_VALUE_BROKEN | + SDHCI_QUIRK2_USE_ADMA3_SUPPORT, .ops = &sdhci_sprd_ops, }; +static const struct hsq_packed_ops packed_ops = { + .packed_algo = mmc_hsq_packed_algo_rw, + .prepare_hardware = sdhci_prepare_packed, + .unprepare_hardware = sdhci_unprepare_packed, + .packed_request = sdhci_packed_request, +}; + static int sdhci_sprd_probe(struct platform_device *pdev) { struct sdhci_host *host; @@ -676,7 +691,18 @@ static int sdhci_sprd_probe(struct platform_device *pdev) goto err_cleanup_host; } - ret = mmc_hsq_init(hsq, host->mmc, NULL, 0); + /* + * If the host controller supports ADMA3 mode, we can enable the + * packed request mode to improve the read/write performance. + * + * Considering the maximum number of ADMA3 entries (default is 16) and + * the request latency, we set the default maximum number of packed + * requests to 8. + */ + if (host->flags & SDHCI_USE_ADMA3) + ret = mmc_hsq_init(hsq, host->mmc, &packed_ops, + SDHCI_MAX_ADMA3_ENTRIES / 2); + else + ret = mmc_hsq_init(hsq, host->mmc, NULL, 0); if (ret) goto err_cleanup_host;