From patchwork Wed Nov 13 15:20:42 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13873899
From: Christoph Hellwig
To: Jens Axboe
Cc: "Michael S. Tsirkin", Jason Wang, Keith Busch, Sagi Grimberg,
 Pavel Begunkov, linux-block@vger.kernel.org,
 virtualization@lists.linux.dev, linux-nvme@lists.infradead.org,
 io-uring@vger.kernel.org
Subject: [PATCH 2/6] virtio_blk: reverse request order in virtio_queue_rqs
Date: Wed, 13 Nov 2024 16:20:42 +0100
Message-ID: <20241113152050.157179-3-hch@lst.de>
In-Reply-To: <20241113152050.157179-1-hch@lst.de>
References: <20241113152050.157179-1-hch@lst.de>

blk_mq_flush_plug_list submits requests in the reverse order in which
they were queued, which leads to a rather suboptimal I/O pattern,
especially on rotational devices.  Fix this by rewriting
virtio_queue_rqs so that it always pops the requests from the passed-in
request list and then adds them to the head of a local submit list.
This actually simplifies the code a bit, as it removes the complicated
list splicing, at the cost of extra updates of the rq_next pointer.  As
that pointer should be cache hot anyway, this should be an easy price
to pay.

Fixes: 0e9911fa768f ("virtio-blk: support mq_ops->queue_rqs()")
Signed-off-by: Christoph Hellwig
Acked-by: Michael S. Tsirkin
---
 drivers/block/virtio_blk.c | 46 +++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 0e99a4714928..b25f7c06a28e 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -471,18 +471,18 @@ static bool virtblk_prep_rq_batch(struct request *req)
 	return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
 }
 
-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
 		struct request **rqlist)
 {
+	struct request *req;
 	unsigned long flags;
-	int err;
 	bool kick;
 
 	spin_lock_irqsave(&vq->lock, flags);
 
-	while (!rq_list_empty(*rqlist)) {
-		struct request *req = rq_list_pop(rqlist);
+	while ((req = rq_list_pop(rqlist))) {
 		struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+		int err;
 
 		err = virtblk_add_req(vq->vq, vbr);
 		if (err) {
@@ -495,37 +495,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
 	kick = virtqueue_kick_prepare(vq->vq);
 	spin_unlock_irqrestore(&vq->lock, flags);
 
-	return kick;
+	if (kick)
+		virtqueue_notify(vq->vq);
 }
 
 static void virtio_queue_rqs(struct request **rqlist)
 {
-	struct request *req, *next, *prev = NULL;
+	struct request *submit_list = NULL;
 	struct request *requeue_list = NULL;
+	struct request **requeue_lastp = &requeue_list;
+	struct virtio_blk_vq *vq = NULL;
+	struct request *req;
 
-	rq_list_for_each_safe(rqlist, req, next) {
-		struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
-		bool kick;
-
-		if (!virtblk_prep_rq_batch(req)) {
-			rq_list_move(rqlist, &requeue_list, req, prev);
-			req = prev;
-			if (!req)
-				continue;
-		}
+	while ((req = rq_list_pop(rqlist))) {
+		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
 
-		if (!next || req->mq_hctx != next->mq_hctx) {
-			req->rq_next = NULL;
-			kick = virtblk_add_req_batch(vq, rqlist);
-			if (kick)
-				virtqueue_notify(vq->vq);
+		if (vq && vq != this_vq)
+			virtblk_add_req_batch(vq, &submit_list);
+		vq = this_vq;
 
-			*rqlist = next;
-			prev = NULL;
-		} else
-			prev = req;
+		if (virtblk_prep_rq_batch(req))
+			rq_list_add(&submit_list, req); /* reverse order */
+		else
+			rq_list_add_tail(&requeue_lastp, req);
 	}
 
+	if (vq)
+		virtblk_add_req_batch(vq, &submit_list);
 	*rqlist = requeue_list;
 }
 
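
As an aside for readers new to the rq_list helpers: the fix works
because plugging already reversed the requests once, so popping from
the plug list and pushing onto the head of a local list reverses them
a second time and restores submission order.  Below is a minimal
user-space sketch of that idea.  It is illustrative only: the toy
struct req and list helpers stand in for the kernel's struct request
and the rq_list_pop/rq_list_add helpers, and are not the real API.

#include <stdio.h>

struct req {
	int tag;
	struct req *next;
};

/* Models rq_list_add: push a request onto the head of a list. */
static void list_add_head(struct req **list, struct req *r)
{
	r->next = *list;
	*list = r;
}

/* Models rq_list_pop: take the head off a list, NULL when empty. */
static struct req *list_pop(struct req **list)
{
	struct req *r = *list;

	if (r)
		*list = r->next;
	return r;
}

int main(void)
{
	struct req reqs[4];
	struct req *plug_list = NULL, *submit_list = NULL, *r;
	int i;

	/*
	 * Plugging pushes onto the head, so submitting tags 0,1,2,3
	 * leaves the plug list as 3,2,1,0.
	 */
	for (i = 0; i < 4; i++) {
		reqs[i].tag = i;
		list_add_head(&plug_list, &reqs[i]);
	}

	/*
	 * Pop from the plug list and push onto the head of the local
	 * submit list: the second reversal restores 0,1,2,3.
	 */
	while ((r = list_pop(&plug_list)))
		list_add_head(&submit_list, r);

	for (r = submit_list; r; r = r->next)
		printf("%d\n", r->tag);	/* prints 0 1 2 3 */
	return 0;
}

The rewritten virtio_queue_rqs above follows the same pattern, with two
additions visible in the diff: it flushes the accumulated batch
whenever the request's virtqueue changes, and it diverts requests that
fail virtblk_prep_rq_batch to requeue_list via rq_list_add_tail, which
keeps the requeued requests in their original order.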