From patchwork Fri Nov 6 17:20:22 2015
X-Patchwork-Id: 7570201
From: Jens Axboe
Subject: [PATCH 4/5] NVMe: add blk polling support
Date: Fri, 6 Nov 2015 10:20:22 -0700
Message-ID: <1446830423-25027-5-git-send-email-axboe@fb.com>
In-Reply-To: <1446830423-25027-1-git-send-email-axboe@fb.com>
References: <1446830423-25027-1-git-send-email-axboe@fb.com>
List-ID: linux-block@vger.kernel.org

Add nvme_poll(), which will check a specific completion queue for
command completions. Wire that up to the new block layer poll
mechanism. Later on we'll set up specific sq/cq pairs that don't have
interrupts enabled, so we can do more efficient polling. As of this
patch, an IRQ will still trigger on command completion.
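
[Not part of the patch text: for context, the caller side of the new
hook looks roughly like the sketch below. poll_for_completion() is a
made-up stand-in for the blk_poll() loop added earlier in this series;
the details are recalled from that series and should be treated as
assumptions, not as the actual block layer code.]

/*
 * Hedged sketch, not from this patch: roughly how the block layer
 * drives the new ->poll hook while a submitted command is pending.
 * poll_for_completion() is a hypothetical name for illustration.
 */
static bool poll_for_completion(struct request_queue *q,
				struct blk_mq_hw_ctx *hctx,
				unsigned int tag)
{
	while (!need_resched()) {
		/* nvme_poll() returns 1 once it has reaped our tag */
		int ret = q->mq_ops->poll(hctx, tag);

		if (ret > 0)
			return true;	/* our command completed */
		if (ret < 0)
			break;		/* driver says stop polling */
		cpu_relax();		/* busy-wait and try again */
	}
	return false;			/* fall back to the IRQ path */
}
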
Signed-off-by: Jens Axboe
Signed-off-by: Christoph Hellwig
---
 drivers/nvme/host/pci.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index e878590e71b6..4a715f49f5db 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -90,7 +90,7 @@ static struct class *nvme_class;
 
 static int __nvme_reset(struct nvme_dev *dev);
 static int nvme_reset(struct nvme_dev *dev);
-static int nvme_process_cq(struct nvme_queue *nvmeq);
+static void nvme_process_cq(struct nvme_queue *nvmeq);
 static void nvme_dead_ctrl(struct nvme_dev *dev);
 
 struct async_cmd_info {
@@ -935,7 +935,7 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_MQ_RQ_QUEUE_BUSY;
 }
 
-static int nvme_process_cq(struct nvme_queue *nvmeq)
+static void __nvme_process_cq(struct nvme_queue *nvmeq, unsigned int *tag)
 {
 	u16 head, phase;
 
@@ -953,6 +953,8 @@ static int nvme_process_cq(struct nvme_queue *nvmeq)
 			head = 0;
 			phase = !phase;
 		}
+		if (tag && *tag == cqe.command_id)
+			*tag = -1;
 		ctx = nvme_finish_cmd(nvmeq, cqe.command_id, &fn);
 		fn(nvmeq, ctx, &cqe);
 	}
@@ -964,14 +966,18 @@ static int nvme_process_cq(struct nvme_queue *nvmeq)
 	 * a big problem.
 	 */
 	if (head == nvmeq->cq_head && phase == nvmeq->cq_phase)
-		return 0;
+		return;
 
 	writel(head, nvmeq->q_db + nvmeq->dev->db_stride);
 	nvmeq->cq_head = head;
 	nvmeq->cq_phase = phase;
 
 	nvmeq->cqe_seen = 1;
-	return 1;
+}
+
+static void nvme_process_cq(struct nvme_queue *nvmeq)
+{
+	__nvme_process_cq(nvmeq, NULL);
 }
 
 static irqreturn_t nvme_irq(int irq, void *data)
@@ -995,6 +1001,23 @@ static irqreturn_t nvme_irq_check(int irq, void *data)
 	return IRQ_WAKE_THREAD;
 }
 
+static int nvme_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag)
+{
+	struct nvme_queue *nvmeq = hctx->driver_data;
+
+	if ((le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status) & 1) ==
+	    nvmeq->cq_phase) {
+		spin_lock_irq(&nvmeq->q_lock);
+		__nvme_process_cq(nvmeq, &tag);
+		spin_unlock_irq(&nvmeq->q_lock);
+
+		if (tag == -1)
+			return 1;
+	}
+
+	return 0;
+}
+
 /*
  * Returns 0 on success. If the result is negative, it's a Linux error code;
  * if the result is positive, it's an NVM Express status code
@@ -1654,6 +1677,7 @@ static struct blk_mq_ops nvme_mq_ops = {
 	.init_hctx	= nvme_init_hctx,
 	.init_request	= nvme_init_request,
 	.timeout	= nvme_timeout,
+	.poll		= nvme_poll,
 };
 
 static void nvme_dev_remove_admin(struct nvme_dev *dev)
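
[Not part of the patch text: as a usage note, one way to exercise this
path once the whole series is applied is to enable polling on the queue
via sysfs (the block layer patch in this series adds an io_poll
attribute; the exact knob name here is recalled, treat it as an
assumption) and then issue synchronous O_DIRECT I/O. The direct-I/O
wait path then spins in nvme_poll() on the submitting CPU instead of
sleeping until the interrupt. A minimal userspace sketch:]

/*
 * Hedged usage sketch: assumes the series is applied and polling was
 * enabled with something like
 *   echo 1 > /sys/block/nvme0n1/queue/io_poll
 * (knob name per the block layer patch in this series, recalled).
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd;

	/* O_DIRECT requires aligned buffers; 4096 is safe here */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;

	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * A synchronous O_DIRECT read: with io_poll enabled, the dio
	 * completion wait calls into the ->poll hook rather than
	 * sleeping on the IRQ-driven completion.
	 */
	if (pread(fd, buf, 4096, 0) != 4096) {
		perror("pread");
		return 1;
	}

	close(fd);
	free(buf);
	return 0;
}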