From patchwork Thu Apr 13 15:56:16 2017
X-Patchwork-Submitter: Max Gurtovoy
X-Patchwork-Id: 9679675
From: Max Gurtovoy <maxg@mellanox.com>
To: linux-nvme@lists.infradead.org, keith.busch@intel.com, hch@lst.de,
	linux-rdma@vger.kernel.org, leon@kernel.org
Cc: vladimirk@mellanox.com, maxg@mellanox.com
Subject: [PATCH v2 3/4] nvme: enable SG gaps support
Date: Thu, 13 Apr 2017 18:56:16 +0300
Message-Id: <1492098977-5231-4-git-send-email-maxg@mellanox.com>
In-Reply-To: <1492098977-5231-1-git-send-email-maxg@mellanox.com>
References: <1492098977-5231-1-git-send-email-maxg@mellanox.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

For controllers that can handle arbitrarily sized bios (e.g. advanced
RDMA controllers) we can allow the block layer to pass us gaps by
skipping the setting of the queue virt_boundary.

Signed-off-by: Max Gurtovoy
Reviewed-by: Keith Busch
Reviewed-by: Christoph Hellwig
Reviewed-by: Leon Romanovsky
---
 drivers/nvme/host/core.c | 5 ++++-
 drivers/nvme/host/nvme.h | 1 +
 2 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9583a5f..72bc70e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1252,7 +1252,10 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 	}
 	if (ctrl->quirks & NVME_QUIRK_STRIPE_SIZE)
 		blk_queue_chunk_sectors(q, ctrl->max_hw_sectors);
-	blk_queue_virt_boundary(q, ctrl->page_size - 1);
+
+	if (!ctrl->sg_gaps_support)
+		blk_queue_virt_boundary(q, ctrl->page_size - 1);
+
 	if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
 		vwc = true;
 	blk_queue_write_cache(q, vwc, vwc);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 2aa20e3..ccb895a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -162,6 +162,7 @@ struct nvme_ctrl {
 	struct work_struct scan_work;
 	struct work_struct async_event_work;
 	struct delayed_work ka_work;
+	bool sg_gaps_support;
 
 	/* Power saving configuration */
 	u64 ps_max_latency_us;