From patchwork Mon Apr 10 15:12:34 2017
X-Patchwork-Submitter: Marta Rybczynska
X-Patchwork-Id: 9672817
Date: Mon, 10 Apr 2017 17:12:34 +0200 (CEST)
From: Marta Rybczynska
To: Leon Romanovsky, Doug Ledford, Jason Gunthorpe, Christoph Hellwig,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    Keith Busch, axboe@fb.com, Max Gurtovoy
Cc: Samuel Jones
Message-ID: <1519881025.363156294.1491837154312.JavaMail.zimbra@kalray.eu>
Subject: [PATCH v2] nvme-rdma: support devices with queue size < 32

In the case of a small NVMe-oF queue size (<32) we may enter a deadlock:
IB send completions are only signaled once every 32 sends, so with fewer
than 32 slots available the send queue fills up before any completion is
reaped. The error is seen as (using mlx5):

  [ 2048.693355] mlx5_0:mlx5_ib_post_send:3765:(pid 7273):
  [ 2048.693360] nvme nvme1: nvme_rdma_post_send failed with error code -12

This patch changes the signaling so that it depends on the queue depth.
The hardcoded magic value has been removed completely.

Signed-off-by: Marta Rybczynska
Signed-off-by: Samuel Jones
Reviewed-by: Christoph Hellwig
---
Changes from v1:
* signal by queue size/2, remove hardcoded 32
* support queue depth of 1

 drivers/nvme/host/rdma.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 47a479f..4de1b92 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1029,6 +1029,18 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_wr_error(cq, wc, "SEND");
 }
 
+static inline int nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
+{
+	int sig_limit;
+
+	/* We signal completion every queue depth/2 and also
+	 * handle the case of a possible device with queue_depth=1,
+	 * where we would need to signal every message.
+	 */
+	sig_limit = max(queue->queue_size / 2, 1);
+	return (++queue->sig_count % sig_limit) == 0;
+}
+
 static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 		struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
 		struct ib_send_wr *first, bool flush)
@@ -1056,9 +1068,6 @@ static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 	 * Would have been way to obvious to handle this in hardware or
 	 * at least the RDMA stack..
 	 *
-	 * This messy and racy code sniplet is copy and pasted from the iSER
-	 * initiator, and the magic '32' comes from there as well.
-	 *
 	 * Always signal the flushes. The magic request used for the flush
 	 * sequencer is not allocated in our driver's tagset and it's
 	 * triggered to be freed by blk_cleanup_queue(). So we need to
@@ -1066,7 +1075,7 @@ static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
 	 * embedded in request's payload, is not freed when __ib_process_cq()
 	 * calls wr_cqe->done().
 	 */
-	if ((++queue->sig_count % 32) == 0 || flush)
+	if (nvme_rdma_queue_sig_limit(queue) || flush)
 		wr.send_flags |= IB_SEND_SIGNALED;
 
 	if (first)
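
(For illustration only, outside the patch: a minimal user-space sketch of the
new signaling cadence. The demo_queue struct and demo_sig_limit() below are
hypothetical stand-ins for the driver's nvme_rdma_queue and
nvme_rdma_queue_sig_limit(); the ternary replaces the kernel's max() macro.
It just shows that queue_size=16 signals every 8th send while queue_size=1
signals every send.)

/*
 * Illustrative sketch only -- not part of the patch.
 * Build with: cc -o sig_demo sig_demo.c
 */
#include <stdio.h>

struct demo_queue {
	int queue_size;	/* stand-in for queue->queue_size */
	int sig_count;	/* stand-in for queue->sig_count */
};

/* Mirror of the new signaling rule: signal every max(queue_size / 2, 1) sends. */
static int demo_sig_limit(struct demo_queue *queue)
{
	int sig_limit = queue->queue_size / 2 > 1 ? queue->queue_size / 2 : 1;

	return (++queue->sig_count % sig_limit) == 0;
}

int main(void)
{
	struct demo_queue small = { .queue_size = 1 };
	struct demo_queue large = { .queue_size = 16 };
	int i;

	/* queue_size=1: every send is signaled, so the queue never stalls. */
	for (i = 1; i <= 4; i++)
		printf("queue_size=1:  send %2d %s\n", i,
		       demo_sig_limit(&small) ? "signaled" : "unsignaled");

	/* queue_size=16: only sends 8 and 16 are signaled. */
	for (i = 1; i <= 16; i++)
		if (demo_sig_limit(&large))
			printf("queue_size=16: send %2d signaled\n", i);

	return 0;
}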