From patchwork Tue Jun 30 19:13:10 2020
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 11634623
From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: Stefan Hajnoczi, qemu-devel@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-block@nongnu.org, Maxim Levitsky, Max Reitz,
    Philippe Mathieu-Daudé
Subject: [PATCH v2 04/12] block/nvme: Define QUEUE_INDEX macros to ease code review
Date: Tue, 30 Jun 2020 21:13:10 +0200
Message-Id: <20200630191318.30021-5-philmd@redhat.com>
In-Reply-To: <20200630191318.30021-1-philmd@redhat.com>
References: <20200630191318.30021-1-philmd@redhat.com>

Use definitions instead of the magic '0' and '1' indexes. This will also
be useful later when multiple I/O queues are used.
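For context (this note and snippet are illustrative only, not part of the
patch): the admin queue pair always sits at slot 0 of s->queues[], and I/O
queue pair n follows at slot 1 + n. A minimal standalone sketch of the
indexing, using a hypothetical three-entry array:

#include <assert.h>

/* Same definitions as introduced by the patch. */
#define QUEUE_INDEX_ADMIN 0
#define QUEUE_INDEX_IO(n) (1 + n)

int main(void)
{
    /* Hypothetical layout: one admin queue pair followed by two I/O queue pairs. */
    const char *queues[] = { "admin", "io0", "io1" };

    assert(queues[QUEUE_INDEX_ADMIN] == queues[0]); /* admin queue is slot 0 */
    assert(queues[QUEUE_INDEX_IO(0)] == queues[1]); /* I/O queue 0 is slot 1 */
    assert(queues[QUEUE_INDEX_IO(1)] == queues[2]); /* I/O queue 1 is slot 2 */
    return 0;
}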
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Philippe Mathieu-Daudé
---
 block/nvme.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index e1893b4e79..28762d7ee8 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -103,6 +103,9 @@ typedef volatile struct {
 
 QEMU_BUILD_BUG_ON(offsetof(NVMeRegs, doorbells) != 0x1000);
 
+#define QUEUE_INDEX_ADMIN 0
+#define QUEUE_INDEX_IO(n) (1 + n)
+
 struct BDRVNVMeState {
     AioContext *aio_context;
     QEMUVFIOState *vfio;
@@ -531,7 +534,7 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
     }
     cmd.prp1 = cpu_to_le64(iova);
 
-    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
+    if (nvme_cmd_sync(bs, s->queues[QUEUE_INDEX_ADMIN], &cmd)) {
         error_setg(errp, "Failed to identify controller");
         goto out;
     }
@@ -555,7 +558,7 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
 
     cmd.cdw10 = 0;
     cmd.nsid = cpu_to_le32(namespace);
-    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
+    if (nvme_cmd_sync(bs, s->queues[QUEUE_INDEX_ADMIN], &cmd)) {
         error_setg(errp, "Failed to identify namespace");
         goto out;
     }
@@ -644,7 +647,7 @@ static bool nvme_add_io_queue(BlockDriverState *bs, Error **errp)
         .cdw10 = cpu_to_le32(((queue_size - 1) << 16) | (n & 0xFFFF)),
         .cdw11 = cpu_to_le32(0x3),
     };
-    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
+    if (nvme_cmd_sync(bs, s->queues[QUEUE_INDEX_ADMIN], &cmd)) {
         error_setg(errp, "Failed to create io queue [%d]", n);
         nvme_free_queue_pair(q);
         return false;
@@ -655,7 +658,7 @@ static bool nvme_add_io_queue(BlockDriverState *bs, Error **errp)
         .cdw10 = cpu_to_le32(((queue_size - 1) << 16) | (n & 0xFFFF)),
         .cdw11 = cpu_to_le32(0x1 | (n << 16)),
     };
-    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
+    if (nvme_cmd_sync(bs, s->queues[QUEUE_INDEX_ADMIN], &cmd)) {
         error_setg(errp, "Failed to create io queue [%d]", n);
         nvme_free_queue_pair(q);
         return false;
@@ -739,16 +742,18 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
 
     /* Set up admin queue. */
     s->queues = g_new(NVMeQueuePair *, 1);
-    s->queues[0] = nvme_create_queue_pair(bs, 0, NVME_QUEUE_SIZE, errp);
-    if (!s->queues[0]) {
+    s->queues[QUEUE_INDEX_ADMIN] = nvme_create_queue_pair(bs, 0,
+                                                          NVME_QUEUE_SIZE,
+                                                          errp);
+    if (!s->queues[QUEUE_INDEX_ADMIN]) {
         ret = -EINVAL;
         goto out;
     }
     s->nr_queues = 1;
     QEMU_BUILD_BUG_ON(NVME_QUEUE_SIZE & 0xF000);
     s->regs->aqa = cpu_to_le32((NVME_QUEUE_SIZE << 16) | NVME_QUEUE_SIZE);
-    s->regs->asq = cpu_to_le64(s->queues[0]->sq.iova);
-    s->regs->acq = cpu_to_le64(s->queues[0]->cq.iova);
+    s->regs->asq = cpu_to_le64(s->queues[QUEUE_INDEX_ADMIN]->sq.iova);
+    s->regs->acq = cpu_to_le64(s->queues[QUEUE_INDEX_ADMIN]->cq.iova);
 
     /* After setting up all control registers we can enable device now. */
     s->regs->cc = cpu_to_le32((ctz32(NVME_CQ_ENTRY_BYTES) << 20) |
@@ -839,7 +844,7 @@ static int nvme_enable_disable_write_cache(BlockDriverState *bs, bool enable,
         .cdw11 = cpu_to_le32(enable ? 0x01 : 0x00),
     };
 
-    ret = nvme_cmd_sync(bs, s->queues[0], &cmd);
+    ret = nvme_cmd_sync(bs, s->queues[QUEUE_INDEX_ADMIN], &cmd);
     if (ret) {
         error_setg(errp, "Failed to configure NVMe write cache");
     }
@@ -1056,7 +1061,7 @@ static coroutine_fn int nvme_co_prw_aligned(BlockDriverState *bs,
 {
     int r;
     BDRVNVMeState *s = bs->opaque;
-    NVMeQueuePair *ioq = s->queues[1];
+    NVMeQueuePair *ioq = s->queues[QUEUE_INDEX_IO(0)];
     NVMeRequest *req;
 
     uint32_t cdw12 = (((bytes >> s->blkshift) - 1) & 0xFFFF) |
@@ -1171,7 +1176,7 @@ static coroutine_fn int nvme_co_pwritev(BlockDriverState *bs,
 static coroutine_fn int nvme_co_flush(BlockDriverState *bs)
 {
     BDRVNVMeState *s = bs->opaque;
-    NVMeQueuePair *ioq = s->queues[1];
+    NVMeQueuePair *ioq = s->queues[QUEUE_INDEX_IO(0)];
     NVMeRequest *req;
     NvmeCmd cmd = {
         .opcode = NVME_CMD_FLUSH,
@@ -1202,7 +1207,7 @@ static coroutine_fn int nvme_co_pwrite_zeroes(BlockDriverState *bs,
                                               BdrvRequestFlags flags)
 {
     BDRVNVMeState *s = bs->opaque;
-    NVMeQueuePair *ioq = s->queues[1];
+    NVMeQueuePair *ioq = s->queues[QUEUE_INDEX_IO(0)];
     NVMeRequest *req;
 
     uint32_t cdw12 = ((bytes >> s->blkshift) - 1) & 0xFFFF;
@@ -1255,7 +1260,7 @@ static int coroutine_fn nvme_co_pdiscard(BlockDriverState *bs,
                                          int bytes)
 {
     BDRVNVMeState *s = bs->opaque;
-    NVMeQueuePair *ioq = s->queues[1];
+    NVMeQueuePair *ioq = s->queues[QUEUE_INDEX_IO(0)];
     NVMeRequest *req;
     NvmeDsmRange *buf;
     QEMUIOVector local_qiov;
@@ -1398,7 +1403,7 @@ static void nvme_aio_unplug(BlockDriverState *bs)
     BDRVNVMeState *s = bs->opaque;
     assert(s->plugged);
     s->plugged = false;
-    for (i = 1; i < s->nr_queues; i++) {
+    for (i = QUEUE_INDEX_IO(0); i < s->nr_queues; i++) {
         NVMeQueuePair *q = s->queues[i];
         qemu_mutex_lock(&q->lock);
         nvme_kick(q);