From patchwork Mon Sep 25 06:14:53 2017
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 9969277
From: Damien Le Moal <damien.lemoal@wdc.com>
To: linux-scsi@vger.kernel.org, "Martin K . Petersen",
    linux-block@vger.kernel.org, Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Subject: [PATCH V5 13/14] block: mq-deadline: Limit write request dispatch
 for zoned block devices
Date: Mon, 25 Sep 2017 15:14:53 +0900
Message-Id: <20170925061454.5533-14-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.13.5
In-Reply-To: <20170925061454.5533-1-damien.lemoal@wdc.com>
References: <20170925061454.5533-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

When dispatching a write request to a zoned block device, only allow the
request to be dispatched if its target zone is not locked. If it is, leave
the request in the scheduler queue and look for another suitable write
request. If no write can be dispatched, allow reads to be dispatched even
if the write batch is not done.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
---
 block/mq-deadline.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 58 insertions(+), 4 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 186c32099845..fc3e50a0a495 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -282,19 +282,47 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
 }
 
 /*
+ * Test if a request can be dispatched.
+ */
+static inline bool deadline_can_dispatch_request(struct deadline_data *dd,
+						 struct request *rq)
+{
+	if (!deadline_request_needs_zone_wlock(dd, rq))
+		return true;
+	return !deadline_zone_is_wlocked(dd, rq);
+}
+
+/*
  * For the specified data direction, return the next request to
  * dispatch using arrival ordered lists.
  */
 static struct request *
 deadline_fifo_request(struct deadline_data *dd, int data_dir)
 {
+	struct request *rq;
+	unsigned long flags;
+
 	if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE))
 		return NULL;
 
 	if (list_empty(&dd->fifo_list[data_dir]))
 		return NULL;
 
-	return rq_entry_fifo(dd->fifo_list[data_dir].next);
+	if (!dd->zones_wlock || data_dir == READ)
+		return rq_entry_fifo(dd->fifo_list[data_dir].next);
+
+	spin_lock_irqsave(&dd->zone_lock, flags);
+
+	list_for_each_entry(rq, &dd->fifo_list[WRITE], queuelist) {
+		if (deadline_can_dispatch_request(dd, rq))
+			goto out;
+	}
+	rq = NULL;
+
+out:
+	spin_unlock_irqrestore(&dd->zone_lock, flags);
+
+	return rq;
 }
 
 /*
@@ -304,10 +332,25 @@ deadline_fifo_request(struct deadline_data *dd, int data_dir)
 static struct request *
 deadline_next_request(struct deadline_data *dd, int data_dir)
 {
+	struct request *rq;
+	unsigned long flags;
+
 	if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE))
 		return NULL;
 
-	return dd->next_rq[data_dir];
+	rq = dd->next_rq[data_dir];
+	if (!dd->zones_wlock || data_dir == READ)
+		return rq;
+
+	spin_lock_irqsave(&dd->zone_lock, flags);
+	while (rq) {
+		if (deadline_can_dispatch_request(dd, rq))
+			break;
+		rq = deadline_latter_request(rq);
+	}
+	spin_unlock_irqrestore(&dd->zone_lock, flags);
+
+	return rq;
 }
 
 /*
@@ -349,7 +392,8 @@ static struct request *__dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	if (reads) {
 		BUG_ON(RB_EMPTY_ROOT(&dd->sort_list[READ]));
 
-		if (writes && (dd->starved++ >= dd->writes_starved))
+		if (deadline_fifo_request(dd, WRITE) &&
+		    (dd->starved++ >= dd->writes_starved))
 			goto dispatch_writes;
 
 		data_dir = READ;
@@ -394,6 +438,13 @@ static struct request *__dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 		rq = next_rq;
 	}
 
+	/*
+	 * If we only have writes queued and none of them can be dispatched,
+	 * rq will be NULL.
+	 */
+	if (!rq)
+		return NULL;
+
 	dd->batching = 0;
 
 dispatch_request:
@@ -560,7 +611,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 
 	/*
 	 * This may be a requeue of a request that has locked its
-	 * target zone. If this is the case, release the request zone lock.
+	 * target zone. If this is the case, release the zone lock.
 	 */
 	if (deadline_request_has_zone_wlock(rq))
 		deadline_wunlock_zone(dd, rq);
@@ -570,6 +621,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 
 	blk_mq_sched_request_inserted(rq);
 
+	if (at_head && deadline_request_needs_zone_wlock(dd, rq))
+		pr_info("######## Write at head !\n");
+
 	if (at_head || blk_rq_is_passthrough(rq)) {
 		if (at_head)
 			list_add(&rq->queuelist, &dd->dispatch);
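
For illustration only, here is a minimal standalone sketch of the selection
rule this patch adds to the write dispatch path: scan the pending writes in
FIFO order and pick the first one whose target zone is not write-locked,
falling back to reads when every candidate zone is locked. The names
fake_request, zone_wlocked and pick_write are hypothetical stand-ins for
struct request, dd->zones_wlock and deadline_fifo_request(); this is not
kernel code and it omits the dd->zone_lock spinlock that serializes the real
scan.

/*
 * Standalone model of "skip writes whose target zone is locked".
 * Build with any C99 compiler, e.g. gcc -std=c99 -o zone-demo zone-demo.c
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_ZONES 8

struct fake_request {
	int id;
	unsigned int zone;	/* target zone of the write */
};

/* One flag per zone: true means a write to that zone is already in flight. */
static bool zone_wlocked[NR_ZONES];

/* Return the first pending write whose target zone is not locked, or NULL. */
static struct fake_request *pick_write(struct fake_request *pending, int nr)
{
	for (int i = 0; i < nr; i++) {
		if (!zone_wlocked[pending[i].zone])
			return &pending[i];
	}
	return NULL;	/* all candidate zones locked: caller dispatches reads */
}

int main(void)
{
	struct fake_request pending[] = {
		{ .id = 1, .zone = 2 },
		{ .id = 2, .zone = 2 },
		{ .id = 3, .zone = 5 },
	};

	zone_wlocked[2] = true;	/* a write to zone 2 is already in flight */

	struct fake_request *rq = pick_write(pending, 3);
	if (rq)
		printf("dispatch write %d (zone %u)\n", rq->id, rq->zone);
	else
		printf("no dispatchable write, dispatch reads instead\n");
	return 0;
}

With zone 2 locked, the sketch skips requests 1 and 2 and picks request 3
(zone 5), mirroring how deadline_fifo_request() walks fifo_list[WRITE] and
how __dd_dispatch_request() falls back to reads when pick_write() would
return NULL.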