From patchwork Mon Apr 18 21:06:28 2016
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 8875011
From: Christoph Hellwig
To: linux-rdma@vger.kernel.org
Subject: [PATCH] IB/iser: fix max_sectors calculation
Date: Mon, 18 Apr 2016 17:06:28 -0400
Message-Id: <1461013588-8825-1-git-send-email-hch@lst.de>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: linux-rdma@vger.kernel.org

iSER currently has a couple of places that set max_sectors in either the
host template or the SCSI host, and all of them get it wrong. This patch
instead uses a single assignment that (hopefully) gets it right: the
max_sectors value must be derived from the number of segments in the FR or
FMR structure, but with one segment subtracted before multiplying by the
page size, as it has to handle the case of non-aligned I/O. Without this I
get trivial to reproduce hangs when running xfstests (on XFS) over iSER to
Linux targets.
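As an aside (not part of the patch), the effect of the new calculation can
be reproduced with plain userspace arithmetic. The sketch below uses
assumed example values only: a 4 KiB device page size, an sg_tablesize of
128 segments and an iser_max_sectors cap of 1024 sectors.

/*
 * Userspace illustration (not part of the patch) of the new max_sectors
 * derivation. All constants below are assumed example values.
 */
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE        4096UL  /* assumed device page size */
#define EXAMPLE_SG_TABLESIZE      128UL  /* assumed FR/FMR segment count */
#define EXAMPLE_ISER_MAX_SECTORS 1024UL  /* assumed module-level cap */

int main(void)
{
	/*
	 * Drop one segment for a possibly misaligned head/tail, then
	 * convert bytes to 512-byte sectors (>> 9).
	 */
	unsigned long max_fr_sectors =
		((EXAMPLE_SG_TABLESIZE - 1) * EXAMPLE_PAGE_SIZE) >> 9;
	unsigned long max_sectors =
		max_fr_sectors < EXAMPLE_ISER_MAX_SECTORS ?
			max_fr_sectors : EXAMPLE_ISER_MAX_SECTORS;

	/*
	 * (128 - 1) * 4096 / 512 = 1016, so max_sectors becomes 1016
	 * rather than the old hard-coded 1024.
	 */
	printf("max_fr_sectors = %lu, max_sectors = %lu\n",
	       max_fr_sectors, max_sectors);
	return 0;
}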
Signed-off-by: Christoph Hellwig
Acked-by: Sagi Grimberg
Reviewed-by: Max Gurtovoy
---
 drivers/infiniband/ulp/iser/iscsi_iser.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 80b6bed..64b3d11 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -612,6 +612,7 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 	struct Scsi_Host *shost;
 	struct iser_conn *iser_conn = NULL;
 	struct ib_conn *ib_conn;
+	u32 max_fr_sectors;
 	u16 max_cmds;
 
 	shost = iscsi_host_alloc(&iscsi_iser_sht, 0, 0);
@@ -632,7 +633,6 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 		iser_conn = ep->dd_data;
 		max_cmds = iser_conn->max_cmds;
 		shost->sg_tablesize = iser_conn->scsi_sg_tablesize;
-		shost->max_sectors = iser_conn->scsi_max_sectors;
 
 		mutex_lock(&iser_conn->state_mutex);
 		if (iser_conn->state != ISER_CONN_UP) {
@@ -657,8 +657,6 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 		 */
 		shost->sg_tablesize = min_t(unsigned short, shost->sg_tablesize,
 			ib_conn->device->ib_device->attrs.max_fast_reg_page_list_len);
-		shost->max_sectors = min_t(unsigned int,
-			1024, (shost->sg_tablesize * PAGE_SIZE) >> 9);
 
 		if (iscsi_host_add(shost,
 				   ib_conn->device->ib_device->dma_device)) {
@@ -672,6 +670,15 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
 			goto free_host;
 	}
 
+	/*
+	 * FRs or FMRs can only map up to a (device) page per entry, but if the
+	 * first entry is misaligned we'll end up using two entries
+	 * (head and tail) for a single page worth data, so we have to drop
+	 * one segment from the calculation.
+	 */
+	max_fr_sectors = ((shost->sg_tablesize - 1) * PAGE_SIZE) >> 9;
+	shost->max_sectors = min(iser_max_sectors, max_fr_sectors);
+
 	if (cmds_max > max_cmds) {
 		iser_info("cmds_max changed from %u to %u\n",
 			  cmds_max, max_cmds);
@@ -989,7 +996,6 @@ static struct scsi_host_template iscsi_iser_sht = {
 	.queuecommand           = iscsi_queuecommand,
 	.change_queue_depth     = scsi_change_queue_depth,
 	.sg_tablesize           = ISCSI_ISER_DEF_SG_TABLESIZE,
-	.max_sectors            = ISER_DEF_MAX_SECTORS,
 	.cmd_per_lun            = ISER_DEF_CMD_PER_LUN,
 	.eh_abort_handler       = iscsi_eh_abort,
 	.eh_device_reset_handler= iscsi_eh_device_reset,
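For readers unfamiliar with the head/tail case mentioned in the comment in
the hunk above, the standalone userspace sketch below (again not part of
the patch, with an assumed 4 KiB page size) shows how a buffer holding N
pages worth of data can span N + 1 device pages when it does not start on a
page boundary, which is why one segment is dropped from the calculation.

/*
 * Userspace sketch (not part of the patch) of the head/tail case: a
 * misaligned buffer of N pages worth of data touches N + 1 device pages,
 * so one FR/FMR mapping segment has to be held in reserve.
 */
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE 4096UL	/* assumed device page size */

static unsigned long pages_spanned(unsigned long offset, unsigned long len)
{
	/* Index of the first and last page touched by the buffer. */
	unsigned long first = offset / EXAMPLE_PAGE_SIZE;
	unsigned long last = (offset + len - 1) / EXAMPLE_PAGE_SIZE;

	return last - first + 1;
}

int main(void)
{
	unsigned long len = 8 * EXAMPLE_PAGE_SIZE;	/* 8 pages worth of data */

	printf("page-aligned buffer: %lu pages\n", pages_spanned(0, len));   /* 8 */
	printf("misaligned buffer:   %lu pages\n", pages_spanned(512, len)); /* 9 */
	return 0;
}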