From patchwork Sun Apr 19 18:19:01 2015
X-Patchwork-Submitter: Dmitry Krivenok
X-Patchwork-Id: 6238741
Date: Sun, 19 Apr 2015 21:19:01 +0300
Subject: [PATCH 1/1] null_blk: fix handling of BLKPREP_DEFER case
From: Dmitry Krivenok
To: linux-fsdevel@vger.kernel.org

When we fail to allocate a new cmd in null_rq_prep_fn() we return
BLKPREP_DEFER, which is not handled properly. In single-queue mode of
null_blk, the following command hangs forever in io_schedule():

$ dd if=/dev/nullb0 of=/dev/null bs=8M count=5000 iflag=direct

The reason is that once 64 commands have been allocated, the 65th
allocation fails because no free tag is left. The request, however, is
kept in the queue, and the queue is never started again (unless another
command does I/O to /dev/nullb0).

This small patch solves the issue by stopping the queue when we detect
that all tags are exhausted, and starting it again when we free a tag.
I've verified that the command mentioned above no longer hangs, and I
also made sure that null_blk with this change survives fio-based stress
tests.

Signed-off-by: Dmitry V.
Krivenok
---
 drivers/block/null_blk.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index 65cd61a..4ac684b 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -25,6 +25,7 @@ struct nullb_queue {
 	unsigned int queue_depth;

 	struct nullb_cmd *cmds;
+	bool no_cmds;
 };

 struct nullb {
@@ -171,6 +172,13 @@ static unsigned int get_tag(struct nullb_queue *nq)
 static void free_cmd(struct nullb_cmd *cmd)
 {
 	put_tag(cmd->nq, cmd->tag);
+	if (cmd->nq->no_cmds) {
+		unsigned long flags;
+		cmd->nq->no_cmds = false;
+		spin_lock_irqsave(cmd->rq->q->queue_lock, flags);
+		blk_start_queue(cmd->rq->q);
+		spin_unlock_irqrestore(cmd->rq->q->queue_lock, flags);
+	}
 }

 static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq)
@@ -195,6 +203,9 @@ static struct nullb_cmd *alloc_cmd(struct nullb_queue *nq, int can_wait)
 	DEFINE_WAIT(wait);

 	cmd = __alloc_cmd(nq);
+	if (!cmd && !can_wait) {
+		nq->no_cmds = true;
+	}
 	if (cmd || !can_wait)
 		return cmd;
@@ -341,6 +352,7 @@ static int null_rq_prep_fn(struct request_queue *q, struct request *req)
 static void null_request_fn(struct request_queue *q)
 {
 	struct request *rq;
+	struct nullb_queue *nq = nullb_to_queue(q->queuedata);

 	while ((rq = blk_fetch_request(q)) != NULL) {
 		struct nullb_cmd *cmd = rq->special;
@@ -349,6 +361,9 @@ static void null_request_fn(struct request_queue *q)
 		null_handle_cmd(cmd);
 		spin_lock_irq(q->queue_lock);
 	}
+	if (nq->no_cmds) {
+		blk_stop_queue(q);
+	}
 }

 static int null_queue_rq(struct blk_mq_hw_ctx *hctx,
@@ -430,6 +445,7 @@ static int setup_commands(struct nullb_queue *nq)
 	if (!nq->cmds)
 		return -ENOMEM;

+	nq->no_cmds = false;
 	tag_size = ALIGN(nq->queue_depth, BITS_PER_LONG) / BITS_PER_LONG;
 	nq->tag_map = kzalloc(tag_size * sizeof(unsigned long), GFP_KERNEL);
 	if (!nq->tag_map) {
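For reviewers unfamiliar with the driver's tag scheme, the exhaustion case the
patch guards against can be modeled in plain userspace C. This is a
hypothetical sketch of a bitmap tag allocator (QUEUE_DEPTH and the function
bodies are assumptions for illustration; only the names mirror the driver's
get_tag()/put_tag()), not the driver's actual code:

```c
#include <assert.h>

/* Hypothetical userspace model of a per-queue tag bitmap. A depth of 64
 * matches the hang scenario described above: the 65th allocation finds no
 * free tag, which is the case null_rq_prep_fn() turns into BLKPREP_DEFER. */
#define QUEUE_DEPTH 64
#define BITS 64UL

static unsigned long tag_map[(QUEUE_DEPTH + BITS - 1) / BITS];

/* Returns a free tag, or -1 when every tag is in use. */
static int get_tag(void)
{
	for (int tag = 0; tag < QUEUE_DEPTH; tag++) {
		if (!(tag_map[tag / BITS] & (1UL << (tag % BITS)))) {
			tag_map[tag / BITS] |= 1UL << (tag % BITS);
			return tag;
		}
	}
	return -1;	/* exhausted: the request would be deferred */
}

/* Releases a tag. In the patched driver, free_cmd() is the point where a
 * queue stopped on exhaustion gets restarted. */
static void put_tag(int tag)
{
	tag_map[tag / BITS] &= ~(1UL << (tag % BITS));
}
```

Without the patch, nothing restarts the queue between the -1 return and the
next put_tag(), which is exactly the window where the dd command above stalls.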