From patchwork Wed Jan 18 15:39:19 2017
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 9524107
Subject: Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers
To: Johannes Thumshirn, Sagi Grimberg
Cc: Jens Axboe, Christoph Hellwig, Linux-scsi@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
    Keith Busch, "lsf-pc@lists.linux-foundation.org"
From: Hannes Reinecke
Date: Wed, 18 Jan 2017 16:39:19 +0100
In-Reply-To: <20170118151643.GJ3514@linux-x5ow.site>
X-Mailing-List: linux-block@vger.kernel.org

On 01/18/2017 04:16 PM, Johannes Thumshirn wrote:
> On Wed, Jan 18, 2017 at 05:14:36PM +0200, Sagi Grimberg wrote:
>>
>>> Hannes just spotted this:
>>>
>>> static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>> 		const struct blk_mq_queue_data *bd)
>>> {
>>> [...]
>>> 	__nvme_submit_cmd(nvmeq, &cmnd);
>>> 	nvme_process_cq(nvmeq);
>>> 	spin_unlock_irq(&nvmeq->q_lock);
>>> 	return BLK_MQ_RQ_QUEUE_OK;
>>> out_cleanup_iod:
>>> 	nvme_free_iod(dev, req);
>>> out_free_cmd:
>>> 	nvme_cleanup_cmd(req);
>>> 	return ret;
>>> }
>>>
>>> So we're draining the CQ on submit. This of course makes polling for
>>> completions in the IRQ handler rather pointless, as we already did it
>>> in the submission path.
>>
>> I think you missed:
>> http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
>
> I indeed did, thanks.
> But it doesn't help.
We're still having to wait for the first interrupt, and if we're really fast
that's the only completion we have to process.

Try this: it should avoid the first interrupt and, with a bit of luck, reduce
the number of interrupts _drastically_.

Cheers,

Hannes

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b4b32e6..e2dd9e2 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -623,6 +623,8 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 	__nvme_submit_cmd(nvmeq, &cmnd);
 	spin_unlock(&nvmeq->sq_lock);
+	disable_irq_nosync(nvmeq_irq(nvmeq));
+	irq_poll_sched(&nvmeq->iop);
 	return BLK_MQ_RQ_QUEUE_OK;
 out_cleanup_iod:
 	nvme_free_iod(dev, req);
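
For completeness: the hunk above only covers the submission side. The CQ still
has to be reaped from the queue's irq_poll callback, which then re-arms the
interrupt once it runs out of work. Below is a rough sketch of what such a
callback could look like; the handler name, the nvmeq->iop member and the
budget-aware reaper nvme_process_cq_budget() are assumptions for illustration,
not part of the patch above. Only irq_poll_init(), irq_poll_complete() and
enable_irq() are the stock kernel APIs.

#include <linux/interrupt.h>
#include <linux/irq_poll.h>

/* Hypothetical poll callback: assumes struct nvme_queue has gained a
 * 'struct irq_poll iop' member and that nvme_process_cq_budget() is a
 * budget-limited variant of nvme_process_cq() which returns the number
 * of completions it reaped. */
static int nvme_irqpoll_handler(struct irq_poll *iop, int budget)
{
	struct nvme_queue *nvmeq = container_of(iop, struct nvme_queue, iop);
	int completed;

	completed = nvme_process_cq_budget(nvmeq, budget);
	if (completed < budget) {
		/* CQ drained: stop polling and re-enable the interrupt that
		 * was masked via disable_irq_nosync() at submission time. */
		irq_poll_complete(iop);
		enable_irq(nvmeq_irq(nvmeq));
	}
	/* Returning less than the budget tells the irq_poll core we're done;
	 * returning the full budget keeps the queue scheduled for another pass. */
	return completed;
}

The callback would be registered once per queue during queue setup, e.g.
irq_poll_init(&nvmeq->iop, NVME_IRQPOLL_BUDGET, nvme_irqpoll_handler), with
the budget value (NVME_IRQPOLL_BUDGET) again being a placeholder.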