From patchwork Wed Jun 17 21:33:58 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610721
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 01/18] hw/block/nvme: Move NvmeRequest has_sg field to a bit flag
Date: Thu, 18 Jun 2020 06:33:58 +0900
Message-Id: <20200617213415.22417-2-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

In addition to the existing has_sg flag, a few more boolean NvmeRequest
flags are going to be introduced in subsequent patches. Convert the
"has_sg" variable to a "flags" field and define the NvmeRequestFlags
enum for the individual flag values.

Signed-off-by: Dmitry Fomichev
Reviewed-by: Alistair Francis
Reviewed-by: Klaus Jensen
---
 hw/block/nvme.c | 8 +++-----
 hw/block/nvme.h | 6 +++++-
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 1aee042d4c..3ed9f3d321 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -350,7 +350,7 @@ static void nvme_rw_cb(void *opaque, int ret)
         block_acct_failed(blk_get_stats(n->conf.blk), &req->acct);
         req->status = NVME_INTERNAL_DEV_ERROR;
     }
-    if (req->has_sg) {
+    if (req->flags & NVME_REQ_FLG_HAS_SG) {
         qemu_sglist_destroy(&req->qsg);
     }
     nvme_enqueue_req_completion(cq, req);
@@ -359,7 +359,6 @@ static void nvme_rw_cb(void *opaque, int ret)
 static uint16_t nvme_flush(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     NvmeRequest *req)
 {
-    req->has_sg = false;
     block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
          BLOCK_ACCT_FLUSH);
     req->aiocb = blk_aio_flush(n->conf.blk, nvme_rw_cb, req);
@@ -383,7 +382,6 @@ static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
         return NVME_LBA_RANGE | NVME_DNR;
     }
 
-    req->has_sg = false;
     block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0,
          BLOCK_ACCT_WRITE);
     req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count,
@@ -422,14 +420,13 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd,
     dma_acct_start(n->conf.blk, &req->acct, &req->qsg, acct);
 
     if (req->qsg.nsg > 0) {
-        req->has_sg = true;
+        req->flags |= NVME_REQ_FLG_HAS_SG;
         req->aiocb = is_write ?
             dma_blk_write(n->conf.blk, &req->qsg, data_offset,
                           BDRV_SECTOR_SIZE, nvme_rw_cb, req) :
             dma_blk_read(n->conf.blk, &req->qsg, data_offset,
                          BDRV_SECTOR_SIZE, nvme_rw_cb, req);
     } else {
-        req->has_sg = false;
         req->aiocb = is_write ?
             blk_aio_pwritev(n->conf.blk, data_offset, &req->iov, 0,
                             nvme_rw_cb, req) :
@@ -917,6 +914,7 @@ static void nvme_process_sq(void *opaque)
         QTAILQ_REMOVE(&sq->req_list, req, entry);
         QTAILQ_INSERT_TAIL(&sq->out_req_list, req, entry);
         memset(&req->cqe, 0, sizeof(req->cqe));
+        req->flags = 0;
         req->cqe.cid = cmd.cid;
 
         status = sq->sqid ? nvme_io_cmd(n, &cmd, req) :
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 1d30c0bca2..0460cc0e62 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -16,11 +16,15 @@ typedef struct NvmeAsyncEvent {
     NvmeAerResult result;
 } NvmeAsyncEvent;
 
+enum NvmeRequestFlags {
+    NVME_REQ_FLG_HAS_SG = 1 << 0,
+};
+
 typedef struct NvmeRequest {
     struct NvmeSQueue       *sq;
     BlockAIOCB              *aiocb;
    uint16_t                 status;
-    bool                    has_sg;
+    uint16_t                 flags;
     NvmeCqe                  cqe;
     BlockAcctCookie          acct;
     QEMUSGList               qsg;
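[Editorial aside] The bit-flag idiom this patch adopts is worth spelling
out once. A minimal standalone sketch (illustrative names, not QEMU code)
of how one flags word replaces a set of booleans:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical flags, mirroring the NvmeRequestFlags idiom above. */
    enum ReqFlags {
        REQ_FLG_HAS_SG = 1 << 0,    /* request owns a scatter-gather list */
        REQ_FLG_OTHER  = 1 << 1,    /* placeholder for a future flag */
    };

    int main(void)
    {
        uint16_t flags = 0;             /* like req->flags = 0 per command */

        flags |= REQ_FLG_HAS_SG;        /* replaces req->has_sg = true */
        if (flags & REQ_FLG_HAS_SG) {   /* replaces if (req->has_sg) */
            printf("destroy the SG list on completion\n");
        }
        flags &= ~REQ_FLG_HAS_SG;       /* clearing a single flag */
        return 0;
    }

Resetting the whole word with a single assignment in nvme_process_sq()
stays cheap no matter how many flags later patches add, which is the
point of the conversion.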
From patchwork Wed Jun 17 21:33:59 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610727
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 02/18] hw/block/nvme: Define 64 bit cqe.result
Date: Thu, 18 Jun 2020 06:33:59 +0900
Message-Id: <20200617213415.22417-3-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

From: Ajay Joshi

A new write command, Zone Append, is added as part of the Zoned
Namespace Command Set. Upon successful completion of this command, the
controller returns the start LBA of the performed write operation in
the cqe.result field. Therefore, this variable needs to be widened from
32 to 64 bits, consuming the reserved 32-bit field that follows result
in the CQE struct. Since the existing commands are expected to return a
32-bit little-endian value, two separate variables, result32 and
result64, are now kept in a union.

Signed-off-by: Ajay Joshi
Signed-off-by: Dmitry Fomichev
Reviewed-by: Alistair Francis
Reviewed-by: Klaus Jensen
---
 block/nvme.c         | 2 +-
 block/trace-events   | 2 +-
 hw/block/nvme.c      | 6 +++---
 include/block/nvme.h | 6 ++++--
 4 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index eb2f54dd9d..ca245ec574 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -287,7 +287,7 @@ static inline int nvme_translate_error(const NvmeCqe *c)
 {
     uint16_t status = (le16_to_cpu(c->status) >> 1) & 0xFF;
     if (status) {
-        trace_nvme_error(le32_to_cpu(c->result),
+        trace_nvme_error(le64_to_cpu(c->result64),
                          le16_to_cpu(c->sq_head),
                          le16_to_cpu(c->sq_id),
                          le16_to_cpu(c->cid),
diff --git a/block/trace-events b/block/trace-events
index 29dff8881c..05c1393943 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -156,7 +156,7 @@ vxhs_get_creds(const char *cacert, const char *client_key, const char *client_ce
 # nvme.c
 nvme_kick(void *s, int queue) "s %p queue %d"
 nvme_dma_flush_queue_wait(void *s) "s %p"
-nvme_error(int cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %d sq_head %d sqid %d cid %d status 0x%x"
+nvme_error(uint64_t cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %ld sq_head %d sqid %d cid %d status 0x%x"
 nvme_process_completion(void *s, int index, int inflight) "s %p queue %d inflight %d"
 nvme_process_completion_queue_busy(void *s, int index) "s %p queue %d"
 nvme_complete_command(void *s, int index, int cid) "s %p queue %d cid %d"
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 3ed9f3d321..a1bbc9acde 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -823,7 +823,7 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
         return NVME_INVALID_FIELD | NVME_DNR;
     }
 
-    req->cqe.result = result;
+    req->cqe.result32 = result;
     return NVME_SUCCESS;
 }
 
@@ -859,8 +859,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
                                     ((dw11 >> 16) & 0xFFFF) + 1,
                                     n->params.max_ioqpairs,
                                     n->params.max_ioqpairs);
-        req->cqe.result = cpu_to_le32((n->params.max_ioqpairs - 1) |
-                                      ((n->params.max_ioqpairs - 1) << 16));
+        req->cqe.result32 = cpu_to_le32((n->params.max_ioqpairs - 1) |
+                                        ((n->params.max_ioqpairs - 1) << 16));
         break;
     case NVME_TIMESTAMP:
         return nvme_set_feature_timestamp(n, cmd);
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 1720ee1d51..9c3a04dcd7 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -577,8 +577,10 @@ typedef struct NvmeAerResult {
 } NvmeAerResult;
 
 typedef struct NvmeCqe {
-    uint32_t    result;
-    uint32_t    rsvd;
+    union {
+        uint64_t result64;
+        uint32_t result32;
+    };
     uint16_t    sq_head;
     uint16_t    sq_id;
     uint16_t    cid;
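[Editorial aside] The union works because both members alias byte offset
0 of the CQE, where the spec places Dword 0. A standalone sketch (layout
mirrored for illustration; assumes a little-endian host, which is what
the cpu_to_le32()/le64_to_cpu() conversions above make explicit):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative mirror of the NvmeCqe layout introduced above. */
    typedef struct Cqe {
        union {
            uint64_t result64;   /* Zone Append: 64-bit start LBA */
            uint32_t result32;   /* legacy commands: 32-bit result */
        };
        uint16_t sq_head;
        uint16_t sq_id;
        uint16_t cid;
        uint16_t status;
    } Cqe;

    _Static_assert(sizeof(Cqe) == 16, "CQE must stay 16 bytes");

    int main(void)
    {
        Cqe cqe;
        memset(&cqe, 0, sizeof(cqe));   /* like memset(&req->cqe, 0, ...) */
        cqe.result32 = 0x1234;          /* a 32-bit write lands in the    */
        printf("result64 = 0x%llx\n",   /* low half of result64           */
               (unsigned long long)cqe.result64);
        return 0;
    }

Because the CQE is zeroed per command, a 32-bit writer leaves the upper
half of result64 clear, so a 64-bit reader sees a correctly extended
value.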
From patchwork Wed Jun 17 21:34:00 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610741
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 03/18] hw/block/nvme: Clean up unused AER definitions
Date: Thu, 18 Jun 2020 06:34:00 +0900
Message-Id: <20200617213415.22417-4-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

Remove the unused struct NvmeAerResult and the SMART-related async
event codes. All other event codes are now categorized by their type.
This avoids having to define all of the values in a single enum,
NvmeAsyncEventRequest, which is now removed. Later commits in this
series will define additional values in some of these enums. No
functional change.

Signed-off-by: Dmitry Fomichev
Reviewed-by: Alistair Francis
Reviewed-by: Klaus Jensen
---
 hw/block/nvme.h      |  1 -
 include/block/nvme.h | 43 ++++++++++++++++++++++---------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 0460cc0e62..4f0dac39ae 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -13,7 +13,6 @@ typedef struct NvmeParams {
 
 typedef struct NvmeAsyncEvent {
     QSIMPLEQ_ENTRY(NvmeAsyncEvent) entry;
-    NvmeAerResult result;
 } NvmeAsyncEvent;
 
 enum NvmeRequestFlags {
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 9c3a04dcd7..3099df99eb 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -553,28 +553,30 @@ typedef struct NvmeDsmRange {
     uint64_t slba;
 } NvmeDsmRange;
 
-enum NvmeAsyncEventRequest {
-    NVME_AER_TYPE_ERROR                     = 0,
-    NVME_AER_TYPE_SMART                     = 1,
-    NVME_AER_TYPE_IO_SPECIFIC               = 6,
-    NVME_AER_TYPE_VENDOR_SPECIFIC           = 7,
-    NVME_AER_INFO_ERR_INVALID_SQ            = 0,
-    NVME_AER_INFO_ERR_INVALID_DB            = 1,
-    NVME_AER_INFO_ERR_DIAG_FAIL             = 2,
-    NVME_AER_INFO_ERR_PERS_INTERNAL_ERR     = 3,
-    NVME_AER_INFO_ERR_TRANS_INTERNAL_ERR    = 4,
-    NVME_AER_INFO_ERR_FW_IMG_LOAD_ERR       = 5,
-    NVME_AER_INFO_SMART_RELIABILITY         = 0,
-    NVME_AER_INFO_SMART_TEMP_THRESH         = 1,
-    NVME_AER_INFO_SMART_SPARE_THRESH        = 2,
+enum NvmeAsyncEventType {
+    NVME_AER_TYPE_ERROR             = 0x00,
+    NVME_AER_TYPE_SMART             = 0x01,
+    NVME_AER_TYPE_NOTICE            = 0x02,
+    NVME_AER_TYPE_CMDSET_SPECIFIC   = 0x06,
+    NVME_AER_TYPE_VENDOR_SPECIFIC   = 0x07,
 };
 
-typedef struct NvmeAerResult {
-    uint8_t event_type;
-    uint8_t event_info;
-    uint8_t log_page;
-    uint8_t resv;
-} NvmeAerResult;
+enum NvmeAsyncErrorInfo {
+    NVME_AER_ERR_INVALID_SQ         = 0x00,
+    NVME_AER_ERR_INVALID_DB         = 0x01,
+    NVME_AER_ERR_DIAG_FAIL          = 0x02,
+    NVME_AER_ERR_PERS_INTERNAL_ERR  = 0x03,
+    NVME_AER_ERR_TRANS_INTERNAL_ERR = 0x04,
+    NVME_AER_ERR_FW_IMG_LOAD_ERR    = 0x05,
+};
+
+enum NvmeAsyncNoticeInfo {
+    NVME_AER_NOTICE_NS_CHANGED      = 0x00,
+};
+
+enum NvmeAsyncEventCfg {
+    NVME_AEN_CFG_NS_ATTR            = 1 << 8,
+};
 
 typedef struct NvmeCqe {
     union {
@@ -881,7 +883,6 @@ static inline void _nvme_check_size(void)
 {
-    QEMU_BUILD_BUG_ON(sizeof(NvmeAerResult) != 4);
     QEMU_BUILD_BUG_ON(sizeof(NvmeCqe) != 16);
     QEMU_BUILD_BUG_ON(sizeof(NvmeDsmRange) != 16);
     QEMU_BUILD_BUG_ON(sizeof(NvmeCmd) != 64);
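[Editorial aside] Even with NvmeAerResult gone, the controller still
reports the same three bytes to the host in Dword 0 of an AER
completion. For reference (a sketch of the spec layout, not code from
this patch): per NVMe 1.3, the event type goes in bits 02:00, the event
information in bits 15:08, and the log page identifier in bits 23:16.

    #include <stdint.h>

    /* Illustrative only: compose AER completion Dword 0 from values
     * kept in the enums above. */
    static inline uint32_t aer_result_dw0(uint8_t type, uint8_t info,
                                          uint8_t log_page)
    {
        return (uint32_t)type | ((uint32_t)info << 8) |
               ((uint32_t)log_page << 16);
    }

    /* e.g. aer_result_dw0(NVME_AER_TYPE_NOTICE, NVME_AER_NOTICE_NS_CHANGED,
     * lid) for a "namespace attributes changed" notice; lid is whatever
     * log page the host should read to clear the event. */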
From patchwork Wed Jun 17 21:34:01 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610817
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 04/18] hw/block/nvme: Add Commands Supported and Effects log
Date: Thu, 18 Jun 2020 06:34:01 +0900
Message-Id: <20200617213415.22417-5-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

Implementing this log page becomes necessary to allow checking for Zone
Append command support in the Zoned Namespace Command Set. This commit
adds the code to report this log page for the NVM Command Set only. The
parts that are specific to zoned operation will be added later in the
series.

Signed-off-by: Dmitry Fomichev
Acked-by: Alistair Francis
---
 hw/block/nvme.c       | 62 +++++++++++++++++++++++++++++++++++++++++++
 hw/block/trace-events |  4 +++
 include/block/nvme.h  | 18 +++++++++++++
 3 files changed, 84 insertions(+)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index a1bbc9acde..03b8deee85 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -871,6 +871,66 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd,
+    uint64_t prp1, uint64_t prp2, uint64_t ofs, uint32_t len)
+{
+    NvmeEffectsLog cmd_eff_log = {};
+    uint32_t *iocs = cmd_eff_log.iocs;
+
+    trace_pci_nvme_cmd_supp_and_effects_log_read();
+
+    if (ofs != 0) {
+        trace_pci_nvme_err_invalid_effects_log_offset(ofs);
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    if (len != sizeof(cmd_eff_log)) {
+        trace_pci_nvme_err_invalid_effects_log_len(len);
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    iocs[NVME_ADM_CMD_DELETE_SQ] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_CREATE_SQ] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_DELETE_CQ] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_CREATE_CQ] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_IDENTIFY] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_SET_FEATURES] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_GET_FEATURES] = NVME_CMD_EFFECTS_CSUPP;
+    iocs[NVME_ADM_CMD_GET_LOG_PAGE] = NVME_CMD_EFFECTS_CSUPP;
+
+    iocs[NVME_CMD_FLUSH] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_WRITE_ZEROS] = NVME_CMD_EFFECTS_CSUPP |
+                                 NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_WRITE] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_READ] = NVME_CMD_EFFECTS_CSUPP;
+
+    return nvme_dma_read_prp(n, (uint8_t *)&cmd_eff_log, len, prp1, prp2);
+}
+
+static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd)
+{
+    uint64_t prp1 = le64_to_cpu(cmd->prp1);
+    uint64_t prp2 = le64_to_cpu(cmd->prp2);
+    uint32_t dw10 = le32_to_cpu(cmd->cdw10);
+    uint32_t dw11 = le32_to_cpu(cmd->cdw11);
+    uint64_t dw12 = le32_to_cpu(cmd->cdw12);
+    uint64_t dw13 = le32_to_cpu(cmd->cdw13);
+    uint64_t ofs = (dw13 << 32) | dw12;
+    uint32_t numdl, numdu, len;
+    uint16_t lid = dw10 & 0xff;
+
+    numdl = dw10 >> 16;
+    numdu = dw11 & 0xffff;
+    len = (((numdu << 16) | numdl) + 1) << 2;
+
+    switch (lid) {
+    case NVME_LOG_CMD_EFFECTS:
+        return nvme_handle_cmd_effects(n, cmd, prp1, prp2, ofs, len);
+    }
+
+    trace_pci_nvme_unsupported_log_page(lid);
+    return NVME_INVALID_FIELD | NVME_DNR;
+}
+
 static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
 {
     switch (cmd->opcode) {
@@ -888,6 +948,8 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
         return nvme_set_feature(n, cmd, req);
     case NVME_ADM_CMD_GET_FEATURES:
         return nvme_get_feature(n, cmd, req);
+    case NVME_ADM_CMD_GET_LOG_PAGE:
+        return nvme_get_log_page(n, cmd);
     default:
         trace_pci_nvme_err_invalid_admin_opc(cmd->opcode);
         return NVME_INVALID_OPCODE | NVME_DNR;
diff --git a/hw/block/trace-events b/hw/block/trace-events
index 958fcc5508..423d491e27 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -58,6 +58,7 @@ pci_nvme_mmio_start_success(void) "setting controller enable bit succeeded"
 pci_nvme_mmio_stopped(void) "cleared controller enable bit"
 pci_nvme_mmio_shutdown_set(void) "shutdown bit set"
 pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
+pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effects log read"
 
 # nvme traces for error conditions
 pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size"
@@ -69,6 +70,8 @@ pci_nvme_err_invalid_ns(uint32_t ns, uint32_t limit) "invalid namespace %u not w
 pci_nvme_err_invalid_opc(uint8_t opc) "invalid opcode 0x%"PRIx8""
 pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode 0x%"PRIx8""
 pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, uint64_t limit) "Invalid LBA start=%"PRIu64" len=%"PRIu64" limit=%"PRIu64""
+pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported and effects log offset must be 0, got %"PRIu64""
+pci_nvme_err_invalid_effects_log_len(uint32_t len) "commands supported and effects log size is 4096, got %"PRIu32""
 pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deletion, sid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submission queue, invalid cqid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_sqid(uint16_t sqid) "failed creating submission queue, invalid sqid=%"PRIu16""
@@ -123,6 +126,7 @@ pci_nvme_ub_db_wr_invalid_cq(uint32_t qid) "completion queue doorbell write for
 pci_nvme_ub_db_wr_invalid_cqhead(uint32_t qid, uint16_t new_head) "completion queue doorbell write value beyond queue size, cqid=%"PRIu32", new_head=%"PRIu16", ignoring"
 pci_nvme_ub_db_wr_invalid_sq(uint32_t qid) "submission queue doorbell write for nonexistent queue, sqid=%"PRIu32", ignoring"
 pci_nvme_ub_db_wr_invalid_sqtail(uint32_t qid, uint16_t new_tail) "submission queue doorbell write value beyond queue size, sqid=%"PRIu32", new_head=%"PRIu16", ignoring"
+pci_nvme_unsupported_log_page(uint16_t lid) "unsupported log page 0x%"PRIx16""
 
 # xen-block.c
 xen_block_realize(const char *type, uint32_t disk, uint32_t partition) "%s d%up%u"
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 3099df99eb..6a58bac0c2 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -691,10 +691,27 @@ enum NvmeSmartWarn {
     NVME_SMART_FAILED_VOLATILE_MEDIA  = 1 << 4,
 };
 
+typedef struct NvmeEffectsLog {
+    uint32_t acs[256];
+    uint32_t iocs[256];
+    uint8_t  resv[2048];
+} NvmeEffectsLog;
+
+enum {
+    NVME_CMD_EFFECTS_CSUPP      = 1 << 0,
+    NVME_CMD_EFFECTS_LBCC       = 1 << 1,
+    NVME_CMD_EFFECTS_NCC        = 1 << 2,
+    NVME_CMD_EFFECTS_NIC        = 1 << 3,
+    NVME_CMD_EFFECTS_CCC        = 1 << 4,
+    NVME_CMD_EFFECTS_CSE_MASK   = 3 << 16,
+    NVME_CMD_EFFECTS_UUID_SEL   = 1 << 19,
+};
+
 enum LogIdentifier {
     NVME_LOG_ERROR_INFO     = 0x01,
     NVME_LOG_SMART_INFO     = 0x02,
     NVME_LOG_FW_SLOT_INFO   = 0x03,
+    NVME_LOG_CMD_EFFECTS    = 0x05,
 };
 
 typedef struct NvmePSD {
@@ -898,5 +915,6 @@ static inline void _nvme_check_size(void)
     QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) != 512);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096);
+    QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096);
 }
 #endif
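[Editorial aside] The NUMDL/NUMDU arithmetic in nvme_get_log_page()
above is easy to get wrong: the two fields concatenate into a 0's-based
dword count, hence the "+ 1" and the "<< 2" dword-to-byte conversion. A
standalone sketch with illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t dw10 = 0x03ff0005;  /* hypothetical: NUMDL=0x03ff, LID=0x05 */
        uint32_t dw11 = 0x00000000;  /* NUMDU in the low 16 bits */

        uint16_t lid   = dw10 & 0xff;       /* log page identifier */
        uint32_t numdl = dw10 >> 16;        /* low part of NUMD    */
        uint32_t numdu = dw11 & 0xffff;     /* high part of NUMD   */
        /* NUMD is a 0's-based dword count: add 1, then x4 for bytes. */
        uint32_t len   = (((numdu << 16) | numdl) + 1) << 2;

        /* prints: lid=0x5 len=4096 -- exactly sizeof(NvmeEffectsLog) */
        printf("lid=0x%x len=%u bytes\n", (unsigned)lid, (unsigned)len);
        return 0;
    }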
From patchwork Wed Jun 17 21:34:02 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610735
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 05/18] hw/block/nvme: Introduce the Namespace Types definitions
Date: Thu, 18 Jun 2020 06:34:02 +0900
Message-Id: <20200617213415.22417-6-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

From: Niklas Cassel

Define the structures and constants required to implement Namespace
Types support.

Signed-off-by: Niklas Cassel
Signed-off-by: Dmitry Fomichev
Reviewed-by: Klaus Jensen
---
 hw/block/nvme.h      |  3 ++
 include/block/nvme.h | 75 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 73 insertions(+), 5 deletions(-)

diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 4f0dac39ae..4fd155c409 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -63,6 +63,9 @@ typedef struct NvmeCQueue {
 
 typedef struct NvmeNamespace {
     NvmeIdNs        id_ns;
+    uint32_t        nsid;
+    uint8_t         csi;
+    QemuUUID        uuid;
 } NvmeNamespace;
 
 static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns)
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 6a58bac0c2..5a1e5e137c 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -50,6 +50,11 @@ enum NvmeCapMask {
     CAP_PMR_MASK       = 0x1,
 };
 
+enum NvmeCapCssBits {
+    CAP_CSS_NVM        = 0x01,
+    CAP_CSS_CSI_SUPP   = 0x40,
+};
+
 #define NVME_CAP_MQES(cap)  (((cap) >> CAP_MQES_SHIFT)   & CAP_MQES_MASK)
 #define NVME_CAP_CQR(cap)   (((cap) >> CAP_CQR_SHIFT)    & CAP_CQR_MASK)
 #define NVME_CAP_AMS(cap)   (((cap) >> CAP_AMS_SHIFT)    & CAP_AMS_MASK)
@@ -101,6 +106,12 @@ enum NvmeCcMask {
     CC_IOCQES_MASK  = 0xf,
 };
 
+enum NvmeCcCss {
+    CSS_NVM_ONLY        = 0,
+    CSS_ALL_NSTYPES     = 6,
+    CSS_ADMIN_ONLY      = 7,
+};
+
 #define NVME_CC_EN(cc)     ((cc >> CC_EN_SHIFT)     & CC_EN_MASK)
 #define NVME_CC_CSS(cc)    ((cc >> CC_CSS_SHIFT)    & CC_CSS_MASK)
 #define NVME_CC_MPS(cc)    ((cc >> CC_MPS_SHIFT)    & CC_MPS_MASK)
@@ -109,6 +120,21 @@ enum NvmeCcMask {
 #define NVME_CC_IOSQES(cc) ((cc >> CC_IOSQES_SHIFT) & CC_IOSQES_MASK)
 #define NVME_CC_IOCQES(cc) ((cc >> CC_IOCQES_SHIFT) & CC_IOCQES_MASK)
 
+#define NVME_SET_CC_EN(cc, val)     \
+    (cc |= (uint32_t)((val) & CC_EN_MASK) << CC_EN_SHIFT)
+#define NVME_SET_CC_CSS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_CSS_MASK) << CC_CSS_SHIFT)
+#define NVME_SET_CC_MPS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_MPS_MASK) << CC_MPS_SHIFT)
+#define NVME_SET_CC_AMS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_AMS_MASK) << CC_AMS_SHIFT)
+#define NVME_SET_CC_SHN(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_SHN_MASK) << CC_SHN_SHIFT)
+#define NVME_SET_CC_IOSQES(cc, val) \
+    (cc |= (uint32_t)((val) & CC_IOSQES_MASK) << CC_IOSQES_SHIFT)
+#define NVME_SET_CC_IOCQES(cc, val) \
+    (cc |= (uint32_t)((val) & CC_IOCQES_MASK) << CC_IOCQES_SHIFT)
+
 enum NvmeCstsShift {
     CSTS_RDY_SHIFT  = 0,
     CSTS_CFS_SHIFT  = 1,
@@ -482,10 +508,41 @@ typedef struct NvmeIdentify {
     uint64_t    rsvd2[2];
     uint64_t    prp1;
     uint64_t    prp2;
-    uint32_t    cns;
-    uint32_t    rsvd11[5];
+    uint8_t     cns;
+    uint8_t     rsvd4;
+    uint16_t    ctrlid;
+    uint16_t    nvmsetid;
+    uint8_t     rsvd3;
+    uint8_t     csi;
+    uint32_t    rsvd12[4];
 } NvmeIdentify;
 
+typedef struct NvmeNsIdDesc {
+    uint8_t     nidt;
+    uint8_t     nidl;
+    uint16_t    rsvd2;
+} NvmeNsIdDesc;
+
+enum NvmeNidType {
+    NVME_NIDT_EUI64     = 0x01,
+    NVME_NIDT_NGUID     = 0x02,
+    NVME_NIDT_UUID      = 0x03,
+    NVME_NIDT_CSI       = 0x04,
+};
+
+enum NvmeNidLength {
+    NVME_NIDL_EUI64     = 8,
+    NVME_NIDL_NGUID     = 16,
+    NVME_NIDL_UUID      = 16,
+    NVME_NIDL_CSI       = 1,
+};
+
+enum NvmeCsi {
+    NVME_CSI_NVM        = 0x00,
+};
+
+#define NVME_SET_CSI(vec, csi) (vec |= (uint8_t)(1 << (csi)))
+
 typedef struct NvmeRwCmd {
     uint8_t     opcode;
     uint8_t     flags;
@@ -603,6 +660,7 @@ enum NvmeStatusCodes {
     NVME_CMD_ABORT_MISSING_FUSE = 0x000a,
     NVME_INVALID_NSID           = 0x000b,
     NVME_CMD_SEQ_ERROR          = 0x000c,
+    NVME_CMD_SET_CMB_REJECTED   = 0x002b,
     NVME_LBA_RANGE              = 0x0080,
     NVME_CAP_EXCEEDED           = 0x0081,
     NVME_NS_NOT_READY           = 0x0082,
@@ -729,9 +787,14 @@ typedef struct NvmePSD {
 #define NVME_IDENTIFY_DATA_SIZE 4096
 
 enum {
-    NVME_ID_CNS_NS             = 0x0,
-    NVME_ID_CNS_CTRL           = 0x1,
-    NVME_ID_CNS_NS_ACTIVE_LIST = 0x2,
+    NVME_ID_CNS_NS                = 0x0,
+    NVME_ID_CNS_CTRL              = 0x1,
+    NVME_ID_CNS_NS_ACTIVE_LIST    = 0x2,
+    NVME_ID_CNS_NS_DESC_LIST      = 0x03,
+    NVME_ID_CNS_CS_NS             = 0x05,
+    NVME_ID_CNS_CS_CTRL           = 0x06,
+    NVME_ID_CNS_CS_NS_ACTIVE_LIST = 0x07,
+    NVME_ID_CNS_IO_COMMAND_SET    = 0x1c,
 };
 
 typedef struct NvmeIdCtrl {
@@ -825,6 +888,7 @@ enum NvmeFeatureIds {
     NVME_WRITE_ATOMICITY            = 0xa,
     NVME_ASYNCHRONOUS_EVENT_CONF    = 0xb,
     NVME_TIMESTAMP                  = 0xe,
+    NVME_COMMAND_SET_PROFILE        = 0x19,
     NVME_SOFTWARE_PROGRESS_MARKER   = 0x80
 };
 
@@ -914,6 +978,7 @@ static inline void _nvme_check_size(void)
     QEMU_BUILD_BUG_ON(sizeof(NvmeFwSlotInfoLog) != 512);
     QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) != 512);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) != 4096);
+    QEMU_BUILD_BUG_ON(sizeof(NvmeNsIdDesc) != 4);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096);
 }
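[Editorial aside] To make the new CC.CSS plumbing concrete, here is a
small standalone sketch (not QEMU code; the shift/mask values mirror
the NvmeCcShift/NvmeCcMask definitions, where CSS occupies CC bits
06:04) of how a host-written CC value is decoded and how the SET macro
composes one:

    #include <stdint.h>
    #include <assert.h>

    #define CC_CSS_SHIFT 4
    #define CC_CSS_MASK  0x7
    #define NVME_CC_CSS(cc)  (((cc) >> CC_CSS_SHIFT) & CC_CSS_MASK)
    #define NVME_SET_CC_CSS(cc, val) \
        ((cc) |= (uint32_t)((val) & CC_CSS_MASK) << CC_CSS_SHIFT)

    enum { CSS_NVM_ONLY = 0, CSS_ALL_NSTYPES = 6, CSS_ADMIN_ONLY = 7 };

    int main(void)
    {
        uint32_t cc = 0;
        NVME_SET_CC_CSS(cc, CSS_ALL_NSTYPES); /* host selects 110b */
        assert(NVME_CC_CSS(cc) == CSS_ALL_NSTYPES);
        return 0;
    }

Note that the SET macros only OR bits in; they rely on the target field
being zero, which holds here because CC.CSS may only change while the
controller is disabled.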
From patchwork Wed Jun 17 21:34:03 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610739
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 06/18] hw/block/nvme: Define trace events related to NS Types
Date: Thu, 18 Jun 2020 06:34:03 +0900
Message-Id: <20200617213415.22417-7-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

A few trace events are defined that are relevant to implementing
Namespace Types (NVMe TP 4056).

Signed-off-by: Dmitry Fomichev
Reviewed-by: Klaus Jensen
Reviewed-by: Alistair Francis
---
 hw/block/trace-events | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/hw/block/trace-events b/hw/block/trace-events
index 423d491e27..3f3323fe38 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -39,8 +39,13 @@ pci_nvme_create_cq(uint64_t addr, uint16_t cqid, uint16_t vector, uint16_t size,
 pci_nvme_del_sq(uint16_t qid) "deleting submission queue sqid=%"PRIu16""
 pci_nvme_del_cq(uint16_t cqid) "deleted completion queue, cqid=%"PRIu16""
 pci_nvme_identify_ctrl(void) "identify controller"
+pci_nvme_identify_ctrl_csi(uint8_t csi) "identify controller, csi=0x%"PRIx8""
 pci_nvme_identify_ns(uint16_t ns) "identify namespace, nsid=%"PRIu16""
+pci_nvme_identify_ns_csi(uint16_t ns, uint8_t csi) "identify namespace, nsid=%"PRIu16", csi=0x%"PRIx8""
 pci_nvme_identify_nslist(uint16_t ns) "identify namespace list, nsid=%"PRIu16""
+pci_nvme_identify_nslist_csi(uint16_t ns, uint8_t csi) "identify namespace list, nsid=%"PRIu16", csi=0x%"PRIx8""
+pci_nvme_list_ns_descriptors(void) "identify namespace descriptors"
+pci_nvme_identify_cmd_set(void) "identify i/o command set"
 pci_nvme_getfeat_vwcache(const char* result) "get feature volatile write cache, result=%s"
 pci_nvme_getfeat_numq(int result) "get feature number of queues, result=%d"
 pci_nvme_setfeat_numq(int reqcq, int reqsq, int gotcq, int gotsq) "requested cq_count=%d sq_count=%d, responding with cq_count=%d sq_count=%d"
@@ -59,6 +64,8 @@ pci_nvme_mmio_stopped(void) "cleared controller enable bit"
 pci_nvme_mmio_shutdown_set(void) "shutdown bit set"
 pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
 pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effects log read"
+pci_nvme_css_nvm_cset_selected_by_host(uint32_t cc) "NVM command set selected by host, bar.cc=0x%"PRIx32""
+pci_nvme_css_all_csets_sel_by_host(uint32_t cc) "all supported command sets selected by host, bar.cc=0x%"PRIx32""
 
 # nvme traces for error conditions
 pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size"
@@ -72,6 +79,9 @@ pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode 0x%"PRIx8""
 pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, uint64_t limit) "Invalid LBA start=%"PRIu64" len=%"PRIu64" limit=%"PRIu64""
 pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported and effects log offset must be 0, got %"PRIu64""
 pci_nvme_err_invalid_effects_log_len(uint32_t len) "commands supported and effects log size is 4096, got %"PRIu32""
+pci_nvme_err_change_css_when_enabled(void) "changing CC.CSS while controller is enabled"
+pci_nvme_err_only_nvm_cmd_set_avail(void) "setting 110b CC.CSS, but only NVM command set is enabled"
+pci_nvme_err_invalid_iocsci(uint32_t idx) "unsupported command set combination index %"PRIu32""
 pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deletion, sid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submission queue, invalid cqid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_sqid(uint16_t sqid) "failed creating submission queue, invalid sqid=%"PRIu16""
@@ -127,6 +137,7 @@ pci_nvme_ub_db_wr_invalid_cqhead(uint32_t qid, uint16_t new_head) "completion qu
 pci_nvme_ub_db_wr_invalid_sq(uint32_t qid) "submission queue doorbell write for nonexistent queue, sqid=%"PRIu32", ignoring"
 pci_nvme_ub_db_wr_invalid_sqtail(uint32_t qid, uint16_t new_tail) "submission queue doorbell write value beyond queue size, sqid=%"PRIu32", new_head=%"PRIu16", ignoring"
 pci_nvme_unsupported_log_page(uint16_t lid) "unsupported log page 0x%"PRIx16""
+pci_nvme_ub_unknown_css_value(void) "unknown value in cc.css field"
 
 # xen-block.c
 xen_block_realize(const char *type, uint32_t disk, uint32_t partition) "%s d%up%u"
From patchwork Wed Jun 17 21:34:04 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610745
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 07/18] hw/block/nvme: Add support for Namespace Types
Date: Thu, 18 Jun 2020 06:34:04 +0900
Message-Id: <20200617213415.22417-8-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

From: Niklas Cassel

Namespace Types introduce a new command set, "I/O Command Sets", that
allows the host to retrieve the command sets associated with a
namespace. Introduce support for the command set and enable detection
for the NVM Command Set.

Signed-off-by: Niklas Cassel
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 210 ++++++++++++++++++++++++++++++++++++++++++++++--
 hw/block/nvme.h | 11 +++
 2 files changed, 216 insertions(+), 5 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 03b8deee85..453f4747a5 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -686,6 +686,26 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeIdentify *c)
                              prp1, prp2);
 }
 
+static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeIdentify *c)
+{
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t *list;
+    uint16_t ret;
+
+    trace_pci_nvme_identify_ctrl_csi(c->csi);
+
+    if (c->csi == NVME_CSI_NVM) {
+        list = g_malloc0(data_len);
+        ret = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2);
+        g_free(list);
+        return ret;
+    } else {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+}
+
 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeIdentify *c)
 {
     NvmeNamespace *ns;
@@ -701,11 +721,42 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeIdentify *c)
     }
 
     ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
 
     return nvme_dma_read_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns),
                              prp1, prp2);
 }
 
+static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeIdentify *c)
+{
+    NvmeNamespace *ns;
+    uint32_t nsid = le32_to_cpu(c->nsid);
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t *list;
+    uint16_t ret;
+
+    trace_pci_nvme_identify_ns_csi(nsid, c->csi);
+
+    if (unlikely(nsid == 0 || nsid > n->num_namespaces)) {
+        trace_pci_nvme_err_invalid_ns(nsid, n->num_namespaces);
+        return NVME_INVALID_NSID | NVME_DNR;
+    }
+
+    ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
+
+    if (c->csi == NVME_CSI_NVM) {
+        list = g_malloc0(data_len);
+        ret = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2);
+        g_free(list);
+        return ret;
+    } else {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+}
+
 static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeIdentify *c)
 {
     static const int data_len = NVME_IDENTIFY_DATA_SIZE;
@@ -733,6 +784,99 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeIdentify *c)
     return ret;
 }
 
+static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeIdentify *c)
+{
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t min_nsid = le32_to_cpu(c->nsid);
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    uint32_t *list;
+    uint16_t ret;
+    int i, j = 0;
+
+    trace_pci_nvme_identify_nslist_csi(min_nsid, c->csi);
+
+    if (c->csi != NVME_CSI_NVM) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    list = g_malloc0(data_len);
+    for (i = 0; i < n->num_namespaces; i++) {
+        if (i < min_nsid) {
+            continue;
+        }
+        list[j++] = cpu_to_le32(i + 1);
+        if (j == data_len / sizeof(uint32_t)) {
+            break;
+        }
+    }
+    ret = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2);
+    g_free(list);
+    return ret;
+}
+
+static uint16_t nvme_list_ns_descriptors(NvmeCtrl *n, NvmeIdentify *c)
+{
+    NvmeNamespace *ns;
+    uint32_t nsid = le32_to_cpu(c->nsid);
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    void *buf_ptr;
+    NvmeNsIdDesc *desc;
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint8_t *buf;
+    uint16_t status;
+
+    trace_pci_nvme_list_ns_descriptors();
+
+    if (unlikely(nsid == 0 || nsid > n->num_namespaces)) {
+        trace_pci_nvme_err_invalid_ns(nsid, n->num_namespaces);
+        return NVME_INVALID_NSID | NVME_DNR;
+    }
+
+    ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
+
+    buf = g_malloc0(data_len);
+    buf_ptr = buf;
+
+    desc = buf_ptr;
+    desc->nidt = NVME_NIDT_UUID;
+    desc->nidl = NVME_NIDL_UUID;
+    buf_ptr += sizeof(*desc);
+    memcpy(buf_ptr, ns->uuid.data, NVME_NIDL_UUID);
+    buf_ptr += NVME_NIDL_UUID;
+
+    desc = buf_ptr;
+    desc->nidt = NVME_NIDT_CSI;
+    desc->nidl = NVME_NIDL_CSI;
+    buf_ptr += sizeof(*desc);
+    *(uint8_t *)buf_ptr = NVME_CSI_NVM;
+
+    status = nvme_dma_read_prp(n, buf, data_len, prp1, prp2);
+    g_free(buf);
+    return status;
+}
+
+static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeIdentify *c)
+{
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t *list;
+    uint8_t *ptr;
+    uint16_t status;
+
+    trace_pci_nvme_identify_cmd_set();
+
+    list = g_malloc0(data_len);
+    ptr = (uint8_t *)list;
+    NVME_SET_CSI(*ptr, NVME_CSI_NVM);
+    status = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2);
+    g_free(list);
+    return status;
+}
+
 static uint16_t nvme_identify(NvmeCtrl *n, NvmeCmd *cmd)
 {
     NvmeIdentify *c = (NvmeIdentify *)cmd;
@@ -740,10 +884,20 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeCmd *cmd)
     switch (le32_to_cpu(c->cns)) {
     case NVME_ID_CNS_NS:
         return nvme_identify_ns(n, c);
+    case NVME_ID_CNS_CS_NS:
+        return nvme_identify_ns_csi(n, c);
     case NVME_ID_CNS_CTRL:
         return nvme_identify_ctrl(n, c);
+    case NVME_ID_CNS_CS_CTRL:
+        return nvme_identify_ctrl_csi(n, c);
     case NVME_ID_CNS_NS_ACTIVE_LIST:
         return nvme_identify_nslist(n, c);
+    case NVME_ID_CNS_CS_NS_ACTIVE_LIST:
+        return nvme_identify_nslist_csi(n, c);
+    case NVME_ID_CNS_NS_DESC_LIST:
+        return nvme_list_ns_descriptors(n, c);
+    case NVME_ID_CNS_IO_COMMAND_SET:
+        return nvme_identify_cmd_set(n, c);
     default:
         trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns));
         return NVME_INVALID_FIELD | NVME_DNR;
@@ -818,6 +972,9 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
         break;
     case NVME_TIMESTAMP:
         return nvme_get_feature_timestamp(n, cmd);
+    case NVME_COMMAND_SET_PROFILE:
+        result = 0;
+        break;
     default:
         trace_pci_nvme_err_invalid_getfeat(dw10);
         return NVME_INVALID_FIELD | NVME_DNR;
@@ -864,6 +1021,15 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
         break;
     case NVME_TIMESTAMP:
         return nvme_set_feature_timestamp(n, cmd);
+        break;
+
+    case NVME_COMMAND_SET_PROFILE:
+        if (dw11 & 0x1ff) {
+            trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
+            return NVME_CMD_SET_CMB_REJECTED | NVME_DNR;
+        }
+        break;
+
     default:
         trace_pci_nvme_err_invalid_setfeat(dw10);
         return NVME_INVALID_FIELD | NVME_DNR;
@@ -1149,6 +1315,29 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data,
         break;
     case 0x14:  /* CC */
         trace_pci_nvme_mmio_cfg(data & 0xffffffff);
+
+        if (NVME_CC_CSS(data) != NVME_CC_CSS(n->bar.cc)) {
+            if (NVME_CC_EN(n->bar.cc)) {
+                NVME_GUEST_ERR(pci_nvme_err_change_css_when_enabled,
+                               "changing selected command set when enabled");
+                break;
+            }
+            switch (NVME_CC_CSS(data)) {
+            case CSS_NVM_ONLY:
+                trace_pci_nvme_css_nvm_cset_selected_by_host(data &
+                                                             0xffffffff);
+                break;
+            case CSS_ALL_NSTYPES:
+                NVME_SET_CC_CSS(n->bar.cc, CSS_ALL_NSTYPES);
+                trace_pci_nvme_css_all_csets_sel_by_host(data & 0xffffffff);
+                break;
+            case CSS_ADMIN_ONLY:
+                break;
+            default:
+                NVME_GUEST_ERR(pci_nvme_ub_unknown_css_value,
+                               "unknown value in CC.CSS field");
+            }
+        }
+
         /* Windows first sends data, then sends enable bit */
         if (!NVME_CC_EN(data) &&
!NVME_CC_EN(n->bar.cc) && !NVME_CC_SHN(data) && !NVME_CC_SHN(n->bar.cc)) @@ -1496,6 +1685,7 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) { int64_t bs_size; NvmeIdNs *id_ns = &ns->id_ns; + int lba_index; bs_size = blk_getlength(n->conf.blk); if (bs_size < 0) { @@ -1505,7 +1695,10 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) n->ns_size = bs_size; - id_ns->lbaf[0].ds = BDRV_SECTOR_BITS; + ns->csi = NVME_CSI_NVM; + qemu_uuid_generate(&ns->uuid); /* TODO make UUIDs persistent */ + lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); + id_ns->lbaf[lba_index].ds = nvme_ilog2(n->conf.logical_block_size); id_ns->nsze = cpu_to_le64(nvme_ns_nlbas(n, ns)); /* no thin provisioning */ @@ -1616,7 +1809,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) id->vid = cpu_to_le16(pci_get_word(pci_conf + PCI_VENDOR_ID)); id->ssvid = cpu_to_le16(pci_get_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID)); strpadcpy((char *)id->mn, sizeof(id->mn), "QEMU NVMe Ctrl", ' '); - strpadcpy((char *)id->fr, sizeof(id->fr), "1.0", ' '); + strpadcpy((char *)id->fr, sizeof(id->fr), "2.0", ' '); strpadcpy((char *)id->sn, sizeof(id->sn), n->params.serial, ' '); id->rab = 6; id->ieee[0] = 0x00; @@ -1640,7 +1833,11 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) NVME_CAP_SET_MQES(n->bar.cap, 0x7ff); NVME_CAP_SET_CQR(n->bar.cap, 1); NVME_CAP_SET_TO(n->bar.cap, 0xf); - NVME_CAP_SET_CSS(n->bar.cap, 1); + /* + * The driver now always supports NS Types, but all commands that + * support CSI field will only handle NVM Command Set. + */ + NVME_CAP_SET_CSS(n->bar.cap, (CAP_CSS_NVM | CAP_CSS_CSI_SUPP)); NVME_CAP_SET_MPSMAX(n->bar.cap, 4); n->bar.vs = 0x00010200; @@ -1650,6 +1847,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) static void nvme_realize(PCIDevice *pci_dev, Error **errp) { NvmeCtrl *n = NVME(pci_dev); + NvmeNamespace *ns; Error *local_err = NULL; int i; @@ -1675,8 +1873,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp) nvme_init_ctrl(n, pci_dev); - for (i = 0; i < n->num_namespaces; i++) { - nvme_init_namespace(n, &n->namespaces[i], &local_err); + ns = n->namespaces; + for (i = 0; i < n->num_namespaces; i++, ns++) { + ns->nsid = i + 1; + nvme_init_namespace(n, ns, &local_err); if (local_err) { error_propagate(errp, local_err); return; diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 4fd155c409..0d29f75475 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -121,4 +121,15 @@ static inline uint64_t nvme_ns_nlbas(NvmeCtrl *n, NvmeNamespace *ns) return n->ns_size >> nvme_ns_lbads(ns); } +static inline int nvme_ilog2(uint64_t i) +{ + int log = -1; + + while (i) { + i >>= 1; + log++; + } + return log; +} + #endif /* HW_NVME_H */ From patchwork Wed Jun 17 21:34:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Fomichev X-Patchwork-Id: 11610751 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DBB2113B1 for ; Wed, 17 Jun 2020 21:51:36 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A247E206E2 for ; Wed, 17 Jun 2020 21:51:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) 
header.d=wdc.com header.i=@wdc.com header.b="AEJtQHE1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A247E206E2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=wdc.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:45308 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jlfy3-0008VM-Ig for patchwork-qemu-devel@patchwork.kernel.org; Wed, 17 Jun 2020 17:51:35 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:46046) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jlfhn-0003yo-56; Wed, 17 Jun 2020 17:34:47 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:29866) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jlfhk-0005I4-Hp; Wed, 17 Jun 2020 17:34:46 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1592429684; x=1623965684; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mfvu2BOfPhIWNMwtlJ7u/QgB0k77cqfsi8k4+wrXG/Q=; b=AEJtQHE1bjJmxnbnnGaeDnL0YOmx4xxsC82iKulnSIxOsG2Onn9wDTJ2 Lf0dObMj/d/hFoC9AApWigSwo5fO5k/hQAhXJBQhDgTauofr9bSU5cefW wXcf6ANEh/7HinnVB6TYdV+ldqtglGOXW0dygE2d2PdmakwA0ESuEXJCI YMFPLRLeHw/R8ij7NJ8jLb9wgjoz9SKAFUAJzZC2gj3FKUkAwkEDOt3oD 4WCmU+nHUCXULnCn0e/E9fXp6huIjKO7QwjMILU5SlEm0JknhqsZjVOd0 7uAiSfpZPX+ChgXo8k7/8wZ36UgJJmM3+OPWrWnynY8ceEkKkiJhrswHZ g==; IronPort-SDR: kW8ZkgUbNVZCJKsPmU3HU0gsawwu/CoVVawJVebFTjct1aLe2r8czGThnAedf+JhqDptmRHIl0 OrTfh8Hf4lIIvhvf4GL6idLwBYDOySuioWdTp/KtN6v7dMBJIzSDVL86tmW+LMOo2hz/PF4p2k lIokae5K5GVFDHV2s+MkRQ4XAwBjY7IXOI4sGIJBvmmbFvuSTczU4h5nl1h2Y9hp+5Aw8i9zdP PSJi0O2LH+gvIH4sI8EQAmidwJ5p+U6gElAmghWjxNIOSW4w+65FWaUxaA618hUuiTt/d3pyD0 e8c= X-IronPort-AV: E=Sophos;i="5.73,523,1583164800"; d="scan'208";a="249439812" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 18 Jun 2020 05:34:42 +0800 IronPort-SDR: CB1CzTIBETE5NftVuY8l0JxEuME9/a5TApmNrj/i1iMBXA/Cb6HZW1OtQG9HxRvnPQ5bPoBlBZ lsojMOknb0LHLI2erZEXoQTctZB/pcJnU= Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 14:23:23 -0700 IronPort-SDR: ZfCNF/7cGvn6FBErpIhZxKq498Nmj7g5L/zN8zJ6gAtsyIIEqUOUHY3zr7x1w5MvGiw90mWa6k JpEdXbqSwCEg== WDCIronportException: Internal Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 17 Jun 2020 14:34:41 -0700 From: Dmitry Fomichev To: Kevin Wolf , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Maxim Levitsky Subject: [PATCH v2 08/18] hw/block/nvme: Make Zoned NS Command Set definitions Date: Thu, 18 Jun 2020 06:34:05 +0900 Message-Id: <20200617213415.22417-9-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com> References: <20200617213415.22417-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=430b82a1d=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/17 17:34:28 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, 
DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Matias Bjorling Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Define values and structures that are needed to support Zoned Namespace Command Set (NVMe TP 4053) in PCI NVMe controller emulator. All new protocol definitions are located in include/block/nvme.h and everything added that is specific to this implementation is kept in hw/block/nvme.h. In order to improve scalability, all open, closed and full zones are organized in separate linked lists. Consequently, almost all zone operations don't require scanning of the entire zone array (which potentially can be quite large) - it is only necessary to enumerate one or more zone lists. Zone lists are designed to be position-independent as they can be persisted to the backing file as a part of zone metadata. NvmeZoneList struct defined in this patch serves as a head of every zone list. NvmeZone structure encapsulates NvmeZoneDescriptor defined in Zoned Command Set specification and adds a few more fields that are internal to this implementation. Signed-off-by: Niklas Cassel Signed-off-by: Hans Holmberg Signed-off-by: Ajay Joshi Signed-off-by: Matias Bjorling Signed-off-by: Shin'ichiro Kawasaki Signed-off-by: Alexey Bogoslavsky Signed-off-by: Dmitry Fomichev Acked-by: Alistair Francis --- hw/block/nvme.h | 130 +++++++++++++++++++++++++++++++++++++++++++ include/block/nvme.h | 119 ++++++++++++++++++++++++++++++++++++++- 2 files changed, 248 insertions(+), 1 deletion(-) diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 0d29f75475..2c932b5e29 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -3,12 +3,22 @@ #include "block/nvme.h" +#define NVME_DEFAULT_ZONE_SIZE 128 /* MiB */ +#define NVME_DEFAULT_MAX_ZA_SIZE 128 /* KiB */ + typedef struct NvmeParams { char *serial; uint32_t num_queues; /* deprecated since 5.1 */ uint32_t max_ioqpairs; uint16_t msix_qsize; uint32_t cmb_size_mb; + + bool zoned; + bool cross_zone_read; + uint8_t fill_pattern; + uint32_t zamds_bs; + uint64_t zone_size; + uint64_t zone_capacity; } NvmeParams; typedef struct NvmeAsyncEvent { @@ -17,6 +27,8 @@ typedef struct NvmeAsyncEvent { enum NvmeRequestFlags { NVME_REQ_FLG_HAS_SG = 1 << 0, + NVME_REQ_FLG_FILL = 1 << 1, + NVME_REQ_FLG_APPEND = 1 << 2, }; typedef struct NvmeRequest { @@ -24,6 +36,7 @@ typedef struct NvmeRequest { BlockAIOCB *aiocb; uint16_t status; uint16_t flags; + uint64_t fill_ofs; NvmeCqe cqe; BlockAcctCookie acct; QEMUSGList qsg; @@ -61,11 +74,35 @@ typedef struct NvmeCQueue { QTAILQ_HEAD(, NvmeRequest) req_list; } NvmeCQueue; +typedef struct NvmeZone { + NvmeZoneDescr d; + uint64_t tstamp; + uint32_t next; + uint32_t prev; + uint8_t rsvd80[8]; +} NvmeZone; + +#define NVME_ZONE_LIST_NIL UINT_MAX + +typedef struct NvmeZoneList { + uint32_t head; + uint32_t tail; + uint32_t size; + uint8_t rsvd12[4]; +} NvmeZoneList; + typedef struct NvmeNamespace { NvmeIdNs id_ns; uint32_t nsid; uint8_t csi; QemuUUID uuid; + + NvmeIdNsZoned *id_ns_zoned; + NvmeZone *zone_array; + NvmeZoneList *exp_open_zones; + NvmeZoneList *imp_open_zones; + NvmeZoneList 
*closed_zones; + NvmeZoneList *full_zones; } NvmeNamespace; static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns) @@ -100,6 +137,7 @@ typedef struct NvmeCtrl { uint32_t num_namespaces; uint32_t max_q_ents; uint64_t ns_size; + uint8_t *cmbuf; uint32_t irq_status; uint64_t host_timestamp; /* Timestamp sent by the host */ @@ -107,6 +145,12 @@ typedef struct NvmeCtrl { HostMemoryBackend *pmrdev; + int zone_file_fd; + uint32_t num_zones; + uint64_t zone_size_bs; + uint64_t zone_array_size; + uint8_t zamds; + NvmeNamespace *namespaces; NvmeSQueue **sq; NvmeCQueue **cq; @@ -121,6 +165,86 @@ static inline uint64_t nvme_ns_nlbas(NvmeCtrl *n, NvmeNamespace *ns) return n->ns_size >> nvme_ns_lbads(ns); } +static inline uint8_t nvme_get_zone_state(NvmeZone *zone) +{ + return zone->d.zs >> 4; +} + +static inline void nvme_set_zone_state(NvmeZone *zone, enum NvmeZoneState state) +{ + zone->d.zs = state << 4; +} + +static inline uint64_t nvme_zone_rd_boundary(NvmeCtrl *n, NvmeZone *zone) +{ + return zone->d.zslba + n->params.zone_size; +} + +static inline uint64_t nvme_zone_wr_boundary(NvmeZone *zone) +{ + return zone->d.zslba + zone->d.zcap; +} + +static inline bool nvme_wp_is_valid(NvmeZone *zone) +{ + uint8_t st = nvme_get_zone_state(zone); + + return st != NVME_ZONE_STATE_FULL && + st != NVME_ZONE_STATE_READ_ONLY && + st != NVME_ZONE_STATE_OFFLINE; +} + +/* + * Initialize a zone list head. + */ +static inline void nvme_init_zone_list(NvmeZoneList *zl) +{ + zl->head = NVME_ZONE_LIST_NIL; + zl->tail = NVME_ZONE_LIST_NIL; + zl->size = 0; +} + +/* + * Return the number of entries contained in a zone list. + */ +static inline uint32_t nvme_zone_list_size(NvmeZoneList *zl) +{ + return zl->size; +} + +/* + * Check if the zone is not currently included in any zone list. + */ +static inline bool nvme_zone_not_in_list(NvmeZone *zone) +{ + return (bool)(zone->prev == 0 && zone->next == 0); +} + +/* + * Return the zone at the head of zone list or NULL if the list is empty. + */ +static inline NvmeZone *nvme_peek_zone_head(NvmeNamespace *ns, NvmeZoneList *zl) +{ + if (zl->head == NVME_ZONE_LIST_NIL) { + return NULL; + } + return &ns->zone_array[zl->head]; } + +/* + * Return the next zone in the list.
+ */ +static inline NvmeZone *nvme_next_zone_in_list(NvmeNamespace *ns, NvmeZone *z, + NvmeZoneList *zl) +{ + assert(!nvme_zone_not_in_list(z)); + + if (z->next == NVME_ZONE_LIST_NIL) { + return NULL; + } + return &ns->zone_array[z->next]; +} + static inline int nvme_ilog2(uint64_t i) { int log = -1; @@ -132,4 +256,10 @@ static inline int nvme_ilog2(uint64_t i) return log; } +static inline void _hw_nvme_check_size(void) +{ + QEMU_BUILD_BUG_ON(sizeof(NvmeZoneList) != 16); + QEMU_BUILD_BUG_ON(sizeof(NvmeZone) != 88); +} + #endif /* HW_NVME_H */ diff --git a/include/block/nvme.h b/include/block/nvme.h index 5a1e5e137c..596c39162b 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -446,6 +446,9 @@ enum NvmeIoCommands { NVME_CMD_COMPARE = 0x05, NVME_CMD_WRITE_ZEROS = 0x08, NVME_CMD_DSM = 0x09, + NVME_CMD_ZONE_MGMT_SEND = 0x79, + NVME_CMD_ZONE_MGMT_RECV = 0x7a, + NVME_CMD_ZONE_APND = 0x7d, }; typedef struct NvmeDeleteQ { @@ -539,6 +542,7 @@ enum NvmeNidLength { enum NvmeCsi { NVME_CSI_NVM = 0x00, + NVME_CSI_ZONED = 0x02, }; #define NVME_SET_CSI(vec, csi) (vec |= (uint8_t)(1 << (csi))) @@ -661,6 +665,7 @@ enum NvmeStatusCodes { NVME_INVALID_NSID = 0x000b, NVME_CMD_SEQ_ERROR = 0x000c, NVME_CMD_SET_CMB_REJECTED = 0x002b, + NVME_INVALID_CMD_SET = 0x002c, NVME_LBA_RANGE = 0x0080, NVME_CAP_EXCEEDED = 0x0081, NVME_NS_NOT_READY = 0x0082, @@ -684,6 +689,14 @@ enum NvmeStatusCodes { NVME_CONFLICTING_ATTRS = 0x0180, NVME_INVALID_PROT_INFO = 0x0181, NVME_WRITE_TO_RO = 0x0182, + NVME_ZONE_BOUNDARY_ERROR = 0x01b8, + NVME_ZONE_FULL = 0x01b9, + NVME_ZONE_READ_ONLY = 0x01ba, + NVME_ZONE_OFFLINE = 0x01bb, + NVME_ZONE_INVALID_WRITE = 0x01bc, + NVME_ZONE_TOO_MANY_ACTIVE = 0x01bd, + NVME_ZONE_TOO_MANY_OPEN = 0x01be, + NVME_ZONE_INVAL_TRANSITION = 0x01bf, NVME_WRITE_FAULT = 0x0280, NVME_UNRECOVERED_READ = 0x0281, NVME_E2E_GUARD_ERROR = 0x0282, @@ -807,7 +820,17 @@ typedef struct NvmeIdCtrl { uint8_t ieee[3]; uint8_t cmic; uint8_t mdts; - uint8_t rsvd255[178]; + uint16_t cntlid; + uint32_t ver; + uint32_t rtd3r; + uint32_t rtd3e; + uint32_t oaes; + uint32_t ctratt; + uint8_t rsvd100[28]; + uint16_t crdt1; + uint16_t crdt2; + uint16_t crdt3; + uint8_t rsvd134[122]; uint16_t oacs; uint8_t acl; uint8_t aerl; @@ -832,6 +855,11 @@ typedef struct NvmeIdCtrl { uint8_t vs[1024]; } NvmeIdCtrl; +typedef struct NvmeIdCtrlZoned { + uint8_t zamds; + uint8_t rsvd1[4095]; +} NvmeIdCtrlZoned; + enum NvmeIdCtrlOacs { NVME_OACS_SECURITY = 1 << 0, NVME_OACS_FORMAT = 1 << 1, @@ -908,6 +936,12 @@ typedef struct NvmeLBAF { uint8_t rp; } NvmeLBAF; +typedef struct NvmeLBAFE { + uint64_t zsze; + uint8_t zdes; + uint8_t rsvd9[7]; +} NvmeLBAFE; + typedef struct NvmeIdNs { uint64_t nsze; uint64_t ncap; @@ -930,6 +964,19 @@ typedef struct NvmeIdNs { uint8_t vs[3712]; } NvmeIdNs; +typedef struct NvmeIdNsZoned { + uint16_t zoc; + uint16_t ozcs; + uint32_t mar; + uint32_t mor; + uint32_t rrl; + uint32_t frl; + uint8_t rsvd20[2796]; + NvmeLBAFE lbafe[16]; + uint8_t rsvd3072[768]; + uint8_t vs[256]; +} NvmeIdNsZoned; + /*Deallocate Logical Block Features*/ #define NVME_ID_NS_DLFEAT_GUARD_CRC(dlfeat) ((dlfeat) & 0x10) @@ -962,6 +1009,71 @@ enum NvmeIdNsDps { DPS_FIRST_EIGHT = 8, }; +enum NvmeZoneAttr { + NVME_ZA_FINISHED_BY_CTLR = 1 << 0, + NVME_ZA_FINISH_RECOMMENDED = 1 << 1, + NVME_ZA_RESET_RECOMMENDED = 1 << 2, + NVME_ZA_ZD_EXT_VALID = 1 << 7, +}; + +typedef struct NvmeZoneReportHeader { + uint64_t nr_zones; + uint8_t rsvd[56]; +} NvmeZoneReportHeader; + +enum NvmeZoneReceiveAction { + NVME_ZONE_REPORT = 0, + NVME_ZONE_REPORT_EXTENDED = 
1, +}; + +enum NvmeZoneReportType { + NVME_ZONE_REPORT_ALL = 0, + NVME_ZONE_REPORT_EMPTY = 1, + NVME_ZONE_REPORT_IMPLICITLY_OPEN = 2, + NVME_ZONE_REPORT_EXPLICITLY_OPEN = 3, + NVME_ZONE_REPORT_CLOSED = 4, + NVME_ZONE_REPORT_FULL = 5, + NVME_ZONE_REPORT_READ_ONLY = 6, + NVME_ZONE_REPORT_OFFLINE = 7, +}; + +typedef struct NvmeZoneDescr { + uint8_t zt; + uint8_t zs; + uint8_t za; + uint8_t rsvd3[5]; + uint64_t zcap; + uint64_t zslba; + uint64_t wp; + uint8_t rsvd32[32]; +} NvmeZoneDescr; + +enum NvmeZoneState { + NVME_ZONE_STATE_RESERVED = 0x00, + NVME_ZONE_STATE_EMPTY = 0x01, + NVME_ZONE_STATE_IMPLICITLY_OPEN = 0x02, + NVME_ZONE_STATE_EXPLICITLY_OPEN = 0x03, + NVME_ZONE_STATE_CLOSED = 0x04, + NVME_ZONE_STATE_READ_ONLY = 0x0D, + NVME_ZONE_STATE_FULL = 0x0E, + NVME_ZONE_STATE_OFFLINE = 0x0F, +}; + +enum NvmeZoneType { + NVME_ZONE_TYPE_RESERVED = 0x00, + NVME_ZONE_TYPE_SEQ_WRITE = 0x02, +}; + +enum NvmeZoneSendAction { + NVME_ZONE_ACTION_RSD = 0x00, + NVME_ZONE_ACTION_CLOSE = 0x01, + NVME_ZONE_ACTION_FINISH = 0x02, + NVME_ZONE_ACTION_OPEN = 0x03, + NVME_ZONE_ACTION_RESET = 0x04, + NVME_ZONE_ACTION_OFFLINE = 0x05, + NVME_ZONE_ACTION_SET_ZD_EXT = 0x10, +}; + static inline void _nvme_check_size(void) { QEMU_BUILD_BUG_ON(sizeof(NvmeCqe) != 16); @@ -978,8 +1090,13 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeFwSlotInfoLog) != 512); QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) != 512); QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) != 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrlZoned) != 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeNsIdDesc) != 4); + QEMU_BUILD_BUG_ON(sizeof(NvmeLBAF) != 4); + QEMU_BUILD_BUG_ON(sizeof(NvmeLBAFE) != 16); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsZoned) != 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) != 64); } #endif From patchwork Wed Jun 17 21:34:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Fomichev X-Patchwork-Id: 11610761 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0D11990 for ; Wed, 17 Jun 2020 21:53:33 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D81872168B for ; Wed, 17 Jun 2020 21:53:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com header.i=@wdc.com header.b="BwAS1vii" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D81872168B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=wdc.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:53994 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jlfzv-0004F2-WF for patchwork-qemu-devel@patchwork.kernel.org; Wed, 17 Jun 2020 17:53:32 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:46048) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jlfhn-00040f-Sf; Wed, 17 Jun 2020 17:34:47 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:29831) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) 
(Exim 4.90_1) (envelope-from ) id 1jlfhl-0005By-Ll; Wed, 17 Jun 2020 17:34:47 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1592429685; x=1623965685; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BZ+3gipOirolS73v2XMIMCSYQ2dSVytLFZoszvCeaRQ=; b=BwAS1vii/OnYK+86E4LqOi6FeLIg5CjHdj8oTitRKk9uywjygM6DWU19 GYO6p8XQZVNYTcHS40u+BRjIsa2yB0Pk4xFnCDrz2IqL9sj7ze3PzUn4b YyLk4GDB2o2nfHI1Gc4Le42tK46Vh+ucGg6gY/Fqc1PqjHYC5mHfy7kzh wj2vaO31aDmwkP0COQ1TzOVNIcXdUxbeuP1z+XvSrp2Wc9tTxSFk6vuZj 3IhO3S6nxRPFDU6ObBDCfYx54an9TjrmM+H8rtnv5+gZ7ZH8N8Ax02T1a gdC2yv8r2n2AONTMVI++I0bl80RtnZsLp3SZjJUum/rQHdOYnzbEVvbvg g==; IronPort-SDR: HKwBe78h8tksscBh9Lry8aGPOKorMLzaBFSjWY1shnNRMqqmpukKq7eD5Q6yn5C/PgAt357rGc eKk+FcuSdePyup0My5O44JbRDMr0d7Skj8ciJbHguMZ3wyR3sh8nATJnj927VUGQB3b9jiif6N lfKky7xtHmCdkwIWgt/tig/zoaWh9HakIw1WdxlGgtUIVrD/eC/w9LOS6UtPwni8N/vUyPNyrT kdqNEFYQs36azUmuSpuH2k9vt0ttHxo4rfZGB/ON0EAiblc1MpLXcbqFQ3R58f09cQIwsQGtRh ibU= X-IronPort-AV: E=Sophos;i="5.73,523,1583164800"; d="scan'208";a="249439816" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 18 Jun 2020 05:34:44 +0800 IronPort-SDR: zdp7TT6B2eLImPPgdOW/uatR0X4Gvx8802l26RaBhkQHZAi6qUaHHo98tkhY7YcyV+YkswtXVH 6xCwX1uhIASAltjvxxSnF/zGoDlKI831U= Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 14:23:25 -0700 IronPort-SDR: JuBd73Ub0h0mcS7HBFTYRAetotmyWyeiGgSc7pLZMC/UH13SDBsEYlFftu1/z+62y9D3N0RPPK Gmw56aGLZtVg== WDCIronportException: Internal Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 17 Jun 2020 14:34:42 -0700 From: Dmitry Fomichev To: Kevin Wolf , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Maxim Levitsky Subject: [PATCH v2 09/18] hw/block/nvme: Define Zoned NS Command Set trace events Date: Thu, 18 Jun 2020 06:34:06 +0900 Message-Id: <20200617213415.22417-10-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com> References: <20200617213415.22417-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=430b82a1d=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/17 17:34:28 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Matias Bjorling Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" The Zoned Namespace Command Set / Namespace Types implementation that is being introduced in this series adds a good number of trace events. Combine all tracepoint definitions into a separate patch to make reviewing more convenient. 
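
For reviewers who do not have the rest of QEMU at hand: each line added to hw/block/trace-events is compiled by scripts/tracetool.py into a trace_<name>() helper that the device code can call after including the generated "trace.h". For example, a definition such as

pci_nvme_open_zone(uint64_t slba, uint32_t zone_idx, int all) "open zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32""

is invoked from hw/block/nvme.c roughly like this (the surrounding handler context is illustrative):

#include "trace.h"

/* in the Zone Management Send handler, when an Open Zone action arrives */
trace_pci_nvme_open_zone(slba, zone_idx, all);

The new events can then be enabled at runtime with a glob pattern, e.g. by starting QEMU with -trace "pci_nvme_*", or by listing patterns in a file passed via -trace events=<file>.
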
Signed-off-by: Dmitry Fomichev --- hw/block/trace-events | 41 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/hw/block/trace-events b/hw/block/trace-events index 3f3323fe38..984db8a20c 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -66,6 +66,31 @@ pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared" pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effects log read" pci_nvme_css_nvm_cset_selected_by_host(uint32_t cc) "NVM command set selected by host, bar.cc=0x%"PRIx32"" pci_nvme_css_all_csets_sel_by_host(uint32_t cc) "all supported command sets selected by host, bar.cc=0x%"PRIx32"" +pci_nvme_open_zone(uint64_t slba, uint32_t zone_idx, int all) "open zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32"" +pci_nvme_close_zone(uint64_t slba, uint32_t zone_idx, int all) "close zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32"" +pci_nvme_finish_zone(uint64_t slba, uint32_t zone_idx, int all) "finish zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32"" +pci_nvme_reset_zone(uint64_t slba, uint32_t zone_idx, int all) "reset zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32"" +pci_nvme_offline_zone(uint64_t slba, uint32_t zone_idx, int all) "offline zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32"" +pci_nvme_set_descriptor_extension(uint64_t slba, uint32_t zone_idx) "set zone descriptor extension, slba=%"PRIu64", idx=%"PRIu32"" +pci_nvme_zone_reset_recommended(uint64_t slba) "slba=%"PRIu64"" +pci_nvme_zone_reset_internal_op(uint64_t slba) "slba=%"PRIu64"" +pci_nvme_zone_finish_recommended(uint64_t slba) "slba=%"PRIu64"" +pci_nvme_zone_finish_internal_op(uint64_t slba) "slba=%"PRIu64"" +pci_nvme_zone_finished_by_controller(uint64_t slba) "slba=%"PRIu64"" +pci_nvme_zd_extension_set(uint32_t zone_idx) "set descriptor extension for zone_idx=%"PRIu32"" +pci_nvme_power_on_close(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to Closed state" +pci_nvme_power_on_reset(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to Empty state" +pci_nvme_power_on_full(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to Full state" +pci_nvme_zone_ae_not_enabled(int info, int log_page, int nsid) "zone async event not enabled, info=0x%"PRIx32", lp=0x%"PRIx32", nsid=%"PRIu32"" +pci_nvme_zone_ae_not_cleared(int info, int log_page, int nsid) "zone async event not cleared, info=0x%"PRIx32", lp=0x%"PRIx32", nsid=%"PRIu32"" +pci_nvme_zone_aen_not_requested(uint32_t oaes) "zone descriptor AENs are not requested by host, oaes=0x%"PRIx32"" +pci_nvme_getfeat_aen_cfg(uint64_t res) "reporting async event config res=%"PRIu64"" +pci_nvme_setfeat_zone_info_aer_on(void) "zone info change notices enabled" +pci_nvme_setfeat_zone_info_aer_off(void) "zone info change notices disabled" +pci_nvme_changed_zone_log_read(uint16_t nsid) "changed zone list log of ns %"PRIu16"" +pci_nvme_reporting_changed_zone(uint64_t zslba, uint8_t za) "zslba=%"PRIu64", attr=0x%"PRIx8"" +pci_nvme_empty_changed_zone_list(void) "no changed zones to report" +pci_nvme_mapped_zone_file(char *zfile_name, int ret) "mapped zone file %s, error %d" # nvme traces for error conditions pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size" @@ -77,10 +102,25 @@ pci_nvme_err_invalid_ns(uint32_t ns, uint32_t limit) "invalid namespace %u not w pci_nvme_err_invalid_opc(uint8_t opc) "invalid opcode 0x%"PRIx8"" pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode
0x%"PRIx8"" pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, uint64_t limit) "Invalid LBA start=%"PRIu64" len=%"PRIu64" limit=%"PRIu64"" +pci_nvme_err_capacity_exceeded(uint64_t zone_id, uint64_t nr_zones) "zone capacity exceeded, zone_id=%"PRIu64", nr_zones=%"PRIu64"" +pci_nvme_err_unaligned_zone_cmd(uint8_t action, uint64_t slba, uint64_t zslba) "unaligned zone op 0x%"PRIx32", got slba=%"PRIu64", zslba=%"PRIu64"" +pci_nvme_err_invalid_zone_state_transition(uint8_t state, uint8_t action, uint64_t slba, uint8_t attrs) "0x%"PRIx32"->0x%"PRIx32", slba=%"PRIu64", attrs=0x%"PRIx32"" +pci_nvme_err_write_not_at_wp(uint64_t slba, uint64_t zone, uint64_t wp) "writing at slba=%"PRIu64", zone=%"PRIu64", but wp=%"PRIu64"" +pci_nvme_err_append_not_at_start(uint64_t slba, uint64_t zone) "appending at slba=%"PRIu64", but zone=%"PRIu64"" +pci_nvme_err_zone_write_not_ok(uint64_t slba, uint32_t nlb, uint32_t status) "slba=%"PRIu64", nlb=%"PRIu32", status=0x%"PRIx16"" +pci_nvme_err_zone_read_not_ok(uint64_t slba, uint32_t nlb, uint32_t status) "slba=%"PRIu64", nlb=%"PRIu32", status=0x%"PRIx16"" +pci_nvme_err_append_too_large(uint64_t slba, uint32_t nlb, uint8_t zamds) "slba=%"PRIu64", nlb=%"PRIu32", zamds=%"PRIu8"" +pci_nvme_err_insuff_active_res(uint32_t max_active) "max_active=%"PRIu32" zone limit exceeded" +pci_nvme_err_insuff_open_res(uint32_t max_open) "max_open=%"PRIu32" zone limit exceeded" +pci_nvme_err_zone_file_invalid(int error) "validation error=%"PRIi32"" +pci_nvme_err_zd_extension_map_error(uint32_t zone_idx) "can't map descriptor extension for zone_idx=%"PRIu32"" +pci_nvme_err_invalid_changed_zone_list_offset(uint64_t ofs) "changed zone list log offset must be 0, got %"PRIu64"" +pci_nvme_err_invalid_changed_zone_list_len(uint32_t len) "changed zone list log size is 4096, got %"PRIu32"" pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported and effects log offset must be 0, got %"PRIu64"" pci_nvme_err_invalid_effects_log_len(uint32_t len) "commands supported and effects log size is 4096, got %"PRIu32"" pci_nvme_err_change_css_when_enabled(void) "changing CC.CSS while controller is enabled" pci_nvme_err_only_nvm_cmd_set_avail(void) "setting 110b CC.CSS, but only NVM command set is enabled" +pci_nvme_err_only_zoned_cmd_set_avail(void) "setting 001b CC.CSS, but only ZONED+NVM command set is enabled" pci_nvme_err_invalid_iocsci(uint32_t idx) "unsupported command set combination index %"PRIu32"" pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deletion, sid=%"PRIu16"" pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submission queue, invalid cqid=%"PRIu16"" @@ -113,6 +153,7 @@ pci_nvme_err_startfail_sqent_too_large(uint8_t log2ps, uint8_t maxlog2ps) "nvme_ pci_nvme_err_startfail_asqent_sz_zero(void) "nvme_start_ctrl failed because the admin submission queue size is zero" pci_nvme_err_startfail_acqent_sz_zero(void) "nvme_start_ctrl failed because the admin completion queue size is zero" pci_nvme_err_startfail(void) "setting controller enable bit failed" +pci_nvme_err_invalid_mgmt_action(int action) "action=0x%"PRIx32"" # Traces for undefined behavior pci_nvme_ub_mmiowr_misaligned32(uint64_t offset) "MMIO write not 32-bit aligned, offset=0x%"PRIx64"" From patchwork Wed Jun 17 21:34:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Fomichev X-Patchwork-Id: 11610769 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org 
[172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 824A013B1 for ; Wed, 17 Jun 2020 21:55:21 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 394F621556 for ; Wed, 17 Jun 2020 21:55:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com header.i=@wdc.com header.b="UAsytfCr" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 394F621556 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=wdc.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:34546 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jlg1g-0007mF-Cs for patchwork-qemu-devel@patchwork.kernel.org; Wed, 17 Jun 2020 17:55:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:46080) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jlfhs-00049c-5F; Wed, 17 Jun 2020 17:34:52 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:29866) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1jlfhn-0005I4-St; Wed, 17 Jun 2020 17:34:51 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1592429687; x=1623965687; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ycWME21cg/UsS3ueUBXs47m9g6qvqYibhaYlF+XdwsA=; b=UAsytfCrdFlfMlcdEYAEeq2yuOxfIMffcpD7/6OV0jOtko8zXPnSe6IW zdgKCH9+NdtS1HUArIxQ2+NFknNImi5h1kRtY2Mt+DV0qvbaIALDUgsm3 EfXBrcJ0hgX8EqbUWTBh/Gg0Jb9yW/8Luh7eOLtlR3uz8WG3++tfQSDOe IfLBwalqLP/Zq3LAAJKNH/1mQQjt5PomaCsIV0CyqVyrYBOgEmVlOLiYc 6b6Hb05klRHuCvnTsjbkU8MAsczpGqCBPDgh+8+eflGH7yFjSRrHt5RmV H06VV4m5lU9vGFAXvTt/04LS3dASu89R5ZAl45S1ed3NTY2J+wOsAMMyb w==; IronPort-SDR: LT780K21qYri/WmXzvJeX0Hj0Rp68+db3KSGSK6JYVwsEAFXNx+8UEY7gbe8FtyW22trL3t6YE Znk06csHjyiV+1pTT/wRfwsbUQ3f3AtjYMPe7eFS6V6c/mQPQSCWbfK1f3yTzZt0ON1iYZasIS NAxV7ef+JTQawKsRxtnc71tkBzNwsdcYEoUsAT8wcJwd5ExRwHpE9w/AWxDY5zUFsztAlAGLy4 Xju5o0JWMfZp3BZIeWR2wH/mdIr4rNTuDrGD9/iwaSltrg7WxWago3WDj+43yE1rGPYJlF9sFp C4U= X-IronPort-AV: E=Sophos;i="5.73,523,1583164800"; d="scan'208";a="249439819" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 18 Jun 2020 05:34:46 +0800 IronPort-SDR: 3VoE/At++K+S7VEkASNWmF+5Gd2/9L4YHb6I97lhDkCU0KCZBfLuUYaZ5D1/hMrIst0lzqGT8Y +bEki08FtXvxQXyiSJiLstGR2gzUVdoaA= Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 14:23:27 -0700 IronPort-SDR: xYAvtn1qrPlx4hgy3m93XkB+Ltb0MVUhSCPeiLWhkHyC/mF8MpE50v/ZdiXmfRzi287sD4jREv sQSAokFZTKPA== WDCIronportException: Internal Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 17 Jun 2020 14:34:44 -0700 From: Dmitry Fomichev To: Kevin Wolf , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Maxim Levitsky Subject: [PATCH v2 10/18] hw/block/nvme: Support Zoned Namespace Command Set Date: Thu, 18 Jun 2020 06:34:07 +0900 Message-Id: <20200617213415.22417-11-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com> References: <20200617213415.22417-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=430b82a1d=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/17 17:34:28 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Matias Bjorling Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" The driver has been changed to advertise the NVM Command Set when the "zoned" driver property is not set (the default) and the Zoned Namespace Command Set otherwise. Handlers for the three new NVMe commands introduced in the Zoned Namespace Command Set specification are added, namely Zone Management Receive, Zone Management Send and Zone Append. Driver initialization code has been extended to create a proper configuration for zoned operation using driver properties. The Read/Write command handler is modified to only allow writes at the write pointer if the namespace is zoned. For the Zone Append command, writes implicitly happen at the write pointer and the starting write pointer value is returned as the result of the command. The Write Zeroes handler is modified to add zoned checks that are identical to those done as part of the Write flow. The code to support Zone Descriptor Extensions is not included in this commit and the driver always reports ZDES 0. A later commit in this series will add ZDE support. This commit doesn't yet include checks for active and open zone limits; it is assumed that there are no limits on either active or open zones. Signed-off-by: Niklas Cassel Signed-off-by: Hans Holmberg Signed-off-by: Ajay Joshi Signed-off-by: Chaitanya Kulkarni Signed-off-by: Matias Bjorling Signed-off-by: Aravind Ramesh Signed-off-by: Shin'ichiro Kawasaki Signed-off-by: Adam Manzanares Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 963 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 933 insertions(+), 30 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 453f4747a5..2e03b0b6ed 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -37,6 +37,7 @@ #include "qemu/osdep.h" #include "qemu/units.h" #include "qemu/error-report.h" +#include "crypto/random.h" #include "hw/block/block.h" #include "hw/pci/msix.h" #include "hw/pci/pci.h" @@ -69,6 +70,98 @@ static void nvme_process_sq(void *opaque); +/* + * Add a zone to the tail of a zone list. + */ +static void nvme_add_zone_tail(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList *zl, + NvmeZone *zone) +{ + uint32_t idx = (uint32_t)(zone - ns->zone_array); + + assert(nvme_zone_not_in_list(zone)); + + if (!zl->size) { + zl->head = zl->tail = idx; + zone->next = zone->prev = NVME_ZONE_LIST_NIL; + } else { + ns->zone_array[zl->tail].next = idx; + zone->prev = zl->tail; + zone->next = NVME_ZONE_LIST_NIL; + zl->tail = idx; + } + zl->size++; +} + +/* + * Remove a zone from a zone list.
The zone must be linked in the list. + */ +static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList *zl, + NvmeZone *zone) +{ + uint32_t idx = (uint32_t)(zone - ns->zone_array); + + assert(!nvme_zone_not_in_list(zone)); + + --zl->size; + if (zl->size == 0) { + zl->head = NVME_ZONE_LIST_NIL; + zl->tail = NVME_ZONE_LIST_NIL; + } else if (idx == zl->head) { + zl->head = zone->next; + ns->zone_array[zl->head].prev = NVME_ZONE_LIST_NIL; + } else if (idx == zl->tail) { + zl->tail = zone->prev; + ns->zone_array[zl->tail].next = NVME_ZONE_LIST_NIL; + } else { + ns->zone_array[zone->next].prev = zone->prev; + ns->zone_array[zone->prev].next = zone->next; + } + + zone->prev = zone->next = 0; +} + +static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + if (!nvme_zone_not_in_list(zone)) { + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_remove_zone(n, ns, ns->exp_open_zones, zone); + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_remove_zone(n, ns, ns->imp_open_zones, zone); + break; + case NVME_ZONE_STATE_CLOSED: + nvme_remove_zone(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_FULL: + nvme_remove_zone(n, ns, ns->full_zones, zone); + } + } + + nvme_set_zone_state(zone, state); + + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_add_zone_tail(n, ns, ns->exp_open_zones, zone); + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_add_zone_tail(n, ns, ns->imp_open_zones, zone); + break; + case NVME_ZONE_STATE_CLOSED: + nvme_add_zone_tail(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_FULL: + nvme_add_zone_tail(n, ns, ns->full_zones, zone); + break; + default: + zone->d.za = 0; + /* fall through */ + case NVME_ZONE_STATE_READ_ONLY: + zone->tstamp = 0; + } +} + static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr) { hwaddr low = n->ctrl_mem.addr; @@ -314,6 +407,7 @@ static void nvme_post_cqes(void *opaque) QTAILQ_REMOVE(&cq->req_list, req, entry); sq = req->sq; + req->cqe.status = cpu_to_le16((req->status << 1) | cq->phase); req->cqe.sq_id = cpu_to_le16(sq->sqid); req->cqe.sq_head = cpu_to_le16(sq->head); @@ -328,6 +422,30 @@ static void nvme_post_cqes(void *opaque) } } +static void nvme_fill_data(QEMUSGList *qsg, QEMUIOVector *iov, + uint64_t offset, uint8_t pattern) +{ + ScatterGatherEntry *entry; + uint32_t len, ent_len; + + if (qsg->nsg > 0) { + entry = qsg->sg; + for (len = qsg->size; len > 0; len -= ent_len) { + ent_len = MIN(len, entry->len); + if (offset > ent_len) { + offset -= ent_len; + } else if (offset != 0) { + dma_memory_set(qsg->as, entry->base + offset, + pattern, ent_len - offset); + offset = 0; + } else { + dma_memory_set(qsg->as, entry->base, pattern, ent_len); + } + entry++; + } + } +} + static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) { assert(cq->cqid == req->sq->cqid); @@ -336,6 +454,114 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); } +static uint16_t nvme_check_zone_write(NvmeZone *zone, uint64_t slba, + uint32_t nlb) +{ + uint16_t status; + + if (unlikely((slba + nlb) > nvme_zone_wr_boundary(zone))) { + return NVME_ZONE_BOUNDARY_ERROR; + } + + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + status = NVME_SUCCESS; + break; + case NVME_ZONE_STATE_FULL: + status = 
NVME_ZONE_FULL; + break; + case NVME_ZONE_STATE_OFFLINE: + status = NVME_ZONE_OFFLINE; + break; + case NVME_ZONE_STATE_READ_ONLY: + status = NVME_ZONE_READ_ONLY; + break; + default: + assert(false); + } + return status; +} + +static uint16_t nvme_check_zone_read(NvmeCtrl *n, NvmeZone *zone, uint64_t slba, + uint32_t nlb, bool zone_x_ok) +{ + uint64_t lba = slba, count; + uint16_t status; + uint8_t zs; + + do { + if (!zone_x_ok && (lba + nlb > nvme_zone_rd_boundary(n, zone))) { + return NVME_ZONE_BOUNDARY_ERROR | NVME_DNR; + } + + zs = nvme_get_zone_state(zone); + switch (zs) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_FULL: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_READ_ONLY: + status = NVME_SUCCESS; + break; + case NVME_ZONE_STATE_OFFLINE: + status = NVME_ZONE_OFFLINE | NVME_DNR; + break; + default: + assert(false); + } + if (status != NVME_SUCCESS) { + break; + } + + if (lba + nlb > nvme_zone_rd_boundary(n, zone)) { + count = nvme_zone_rd_boundary(n, zone) - lba; + } else { + count = nlb; + } + + lba += count; + nlb -= count; + zone++; + } while (nlb); + + return status; +} + +static uint64_t nvme_finalize_zone_write(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint32_t nlb) +{ + uint64_t result = cpu_to_le64(zone->d.wp); + uint8_t zs = nvme_get_zone_state(zone); + + zone->d.wp += nlb; + + if (zone->d.wp == nvme_zone_wr_boundary(zone)) { + switch (zs) { + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_EMPTY: + break; + default: + assert(false); + } + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); + } else { + switch (zs) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_CLOSED: + nvme_assign_zone_state(n, ns, zone, + NVME_ZONE_STATE_IMPLICITLY_OPEN); + } + } + + return result; +} + static void nvme_rw_cb(void *opaque, int ret) { NvmeRequest *req = opaque; @@ -344,6 +570,10 @@ static void nvme_rw_cb(void *opaque, int ret) NvmeCQueue *cq = n->cq[sq->cqid]; if (!ret) { + if (req->flags & NVME_REQ_FLG_FILL) { + nvme_fill_data(&req->qsg, &req->iov, req->fill_ofs, + n->params.fill_pattern); + } block_acct_done(blk_get_stats(n->conf.blk), &req->acct); req->status = NVME_SUCCESS; } else { @@ -370,22 +600,53 @@ static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)cmd; + NvmeZone *zone = NULL; const uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); const uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds; uint64_t slba = le64_to_cpu(rw->slba); uint32_t nlb = le16_to_cpu(rw->nlb) + 1; + uint64_t zone_idx; uint64_t offset = slba << data_shift; uint32_t count = nlb << data_shift; + uint16_t status; if (unlikely(slba + nlb > ns->id_ns.nsze)) { trace_pci_nvme_err_invalid_lba_range(slba, nlb, ns->id_ns.nsze); return NVME_LBA_RANGE | NVME_DNR; } + if (n->params.zoned) { + zone_idx = slba / n->params.zone_size; + if (unlikely(zone_idx >= n->num_zones)) { + trace_pci_nvme_err_capacity_exceeded(zone_idx, n->num_zones); + return NVME_CAP_EXCEEDED | NVME_DNR; + } + + zone = &ns->zone_array[zone_idx]; + + status = nvme_check_zone_write(zone, slba, nlb); + if (status != NVME_SUCCESS) { + trace_pci_nvme_err_zone_write_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + assert(nvme_wp_is_valid(zone)); + if (unlikely(slba != zone->d.wp)) { + trace_pci_nvme_err_write_not_at_wp(slba, zone->d.zslba, + zone->d.wp); + 
return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + } + block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0, BLOCK_ACCT_WRITE); req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count, BDRV_REQ_MAY_UNMAP, nvme_rw_cb, req); + + if (n->params.zoned) { + req->cqe.result64 = nvme_finalize_zone_write(n, ns, zone, nlb); + } + return NVME_NO_COMPLETE; } @@ -393,16 +654,19 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)cmd; + NvmeZone *zone = NULL; uint32_t nlb = le32_to_cpu(rw->nlb) + 1; uint64_t slba = le64_to_cpu(rw->slba); uint64_t prp1 = le64_to_cpu(rw->prp1); uint64_t prp2 = le64_to_cpu(rw->prp2); - + uint64_t zone_idx = 0; + uint16_t status; uint8_t lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); uint8_t data_shift = ns->id_ns.lbaf[lba_index].ds; uint64_t data_size = (uint64_t)nlb << data_shift; - uint64_t data_offset = slba << data_shift; - int is_write = rw->opcode == NVME_CMD_WRITE ? 1 : 0; + uint64_t data_offset; + bool is_write = rw->opcode == NVME_CMD_WRITE || + (req->flags & NVME_REQ_FLG_APPEND); enum BlockAcctType acct = is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ; trace_pci_nvme_rw(is_write ? "write" : "read", nlb, data_size, slba); @@ -413,11 +677,79 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, return NVME_LBA_RANGE | NVME_DNR; } + if (n->params.zoned) { + zone_idx = slba / n->params.zone_size; + if (unlikely(zone_idx >= n->num_zones)) { + trace_pci_nvme_err_capacity_exceeded(zone_idx, n->num_zones); + return NVME_CAP_EXCEEDED | NVME_DNR; + } + + zone = &ns->zone_array[zone_idx]; + + if (is_write) { + status = nvme_check_zone_write(zone, slba, nlb); + if (status != NVME_SUCCESS) { + trace_pci_nvme_err_zone_write_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + assert(nvme_wp_is_valid(zone)); + if (req->flags & NVME_REQ_FLG_APPEND) { + if (unlikely(slba != zone->d.zslba)) { + trace_pci_nvme_err_append_not_at_start(slba, zone->d.zslba); + return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + if (data_size > (n->page_size << n->zamds)) { + trace_pci_nvme_err_append_too_large(slba, nlb, n->zamds); + return NVME_INVALID_FIELD | NVME_DNR; + } + slba = zone->d.wp; + } else if (unlikely(slba != zone->d.wp)) { + trace_pci_nvme_err_write_not_at_wp(slba, zone->d.zslba, + zone->d.wp); + return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + } else { + status = nvme_check_zone_read(n, zone, slba, nlb, + n->params.cross_zone_read); + if (status != NVME_SUCCESS) { + trace_pci_nvme_err_zone_read_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + if (slba + nlb > zone->d.wp) { + /* + * All or some data is read above the WP. 
Need to + * fill out the buffer area that has no backing data + * with a predefined data pattern (zeros by default) + */ + req->flags |= NVME_REQ_FLG_FILL; + if (slba >= zone->d.wp) { + req->fill_ofs = 0; + } else { + req->fill_ofs = ((zone->d.wp - slba) << data_shift); + } + } + } + } else if (req->flags & NVME_REQ_FLG_APPEND) { + trace_pci_nvme_err_invalid_opc(cmd->opcode); + return NVME_INVALID_OPCODE | NVME_DNR; + } + if (nvme_map_prp(&req->qsg, &req->iov, prp1, prp2, data_size, n)) { block_acct_invalid(blk_get_stats(n->conf.blk), acct); return NVME_INVALID_FIELD | NVME_DNR; } + if (unlikely(!is_write && (req->flags & NVME_REQ_FLG_FILL) && + (req->fill_ofs == 0))) { + /* No backend I/O necessary, only need to fill the buffer */ + nvme_fill_data(&req->qsg, &req->iov, 0, n->params.fill_pattern); + req->status = NVME_SUCCESS; + return NVME_SUCCESS; + } + + data_offset = slba << data_shift; dma_acct_start(n->conf.blk, &req->acct, &req->qsg, acct); if (req->qsg.nsg > 0) { req->flags |= NVME_REQ_FLG_HAS_SG; @@ -434,9 +766,383 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, req); } + if (is_write && n->params.zoned) { + req->cqe.result64 = nvme_finalize_zone_write(n, ns, zone, nlb); + } + return NVME_NO_COMPLETE; } +static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeCtrl *n, NvmeNamespace *ns, + NvmeCmd *c, uint64_t *slba, uint64_t *zone_idx) +{ + uint32_t dw10 = le32_to_cpu(c->cdw10); + uint32_t dw11 = le32_to_cpu(c->cdw11); + + if (!n->params.zoned) { + trace_pci_nvme_err_invalid_opc(c->opcode); + return NVME_INVALID_OPCODE | NVME_DNR; + } + + *slba = ((uint64_t)dw11) << 32 | dw10; + if (unlikely(*slba >= ns->id_ns.nsze)) { + trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze); + *slba = 0; + return NVME_LBA_RANGE | NVME_DNR; + } + + *zone_idx = *slba / n->params.zone_size; + if (unlikely(*zone_idx >= n->num_zones)) { + trace_pci_nvme_err_capacity_exceeded(*zone_idx, n->num_zones); + *zone_idx = 0; + return NVME_CAP_EXCEEDED | NVME_DNR; + } + + return NVME_SUCCESS; +} + +static uint16_t nvme_open_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPEN); + /* fall through */ + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_open_all(uint8_t state) +{ + return state == NVME_ZONE_STATE_CLOSED; +} + +static uint16_t nvme_close_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + /* fall through */ + case NVME_ZONE_STATE_CLOSED: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_close_all(uint8_t state) +{ + return state == NVME_ZONE_STATE_IMPLICITLY_OPEN || + state == NVME_ZONE_STATE_EXPLICITLY_OPEN; +} + +static uint16_t nvme_finish_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_EMPTY: + zone->d.wp = nvme_zone_wr_boundary(zone); + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); + /* fall through */ + case NVME_ZONE_STATE_FULL: + return NVME_SUCCESS; + } + + return 
NVME_ZONE_INVAL_TRANSITION;
+}
+
+static bool nvme_cond_finish_all(uint8_t state)
+{
+    return state == NVME_ZONE_STATE_IMPLICITLY_OPEN ||
+           state == NVME_ZONE_STATE_EXPLICITLY_OPEN ||
+           state == NVME_ZONE_STATE_CLOSED;
+}
+
+static uint16_t nvme_reset_zone(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, uint8_t state)
+{
+    switch (state) {
+    case NVME_ZONE_STATE_EXPLICITLY_OPEN:
+    case NVME_ZONE_STATE_IMPLICITLY_OPEN:
+    case NVME_ZONE_STATE_CLOSED:
+    case NVME_ZONE_STATE_FULL:
+        zone->d.wp = zone->d.zslba;
+        nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EMPTY);
+        /* fall through */
+    case NVME_ZONE_STATE_EMPTY:
+        return NVME_SUCCESS;
+    }
+
+    return NVME_ZONE_INVAL_TRANSITION;
+}
+
+static bool nvme_cond_reset_all(uint8_t state)
+{
+    return state == NVME_ZONE_STATE_IMPLICITLY_OPEN ||
+           state == NVME_ZONE_STATE_EXPLICITLY_OPEN ||
+           state == NVME_ZONE_STATE_CLOSED ||
+           state == NVME_ZONE_STATE_FULL;
+}
+
+static uint16_t nvme_offline_zone(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, uint8_t state)
+{
+    switch (state) {
+    case NVME_ZONE_STATE_READ_ONLY:
+        nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_OFFLINE);
+        /* fall through */
+    case NVME_ZONE_STATE_OFFLINE:
+        return NVME_SUCCESS;
+    }
+
+    return NVME_ZONE_INVAL_TRANSITION;
+}
+
+static bool nvme_cond_offline_all(uint8_t state)
+{
+    return state == NVME_ZONE_STATE_READ_ONLY;
+}
+
+static uint16_t nvme_do_zone_op(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, uint8_t state, bool all,
+    uint16_t (*op_hndlr)(NvmeCtrl *, NvmeNamespace *, NvmeZone *,
+                         uint8_t), bool (*proc_zone)(uint8_t))
+{
+    int i;
+    uint16_t status = 0;
+
+    if (!all) {
+        status = op_hndlr(n, ns, zone, state);
+    } else {
+        for (i = 0; i < n->num_zones; i++, zone++) {
+            state = nvme_get_zone_state(zone);
+            if (proc_zone(state)) {
+                status = op_hndlr(n, ns, zone, state);
+                if (status != NVME_SUCCESS) {
+                    break;
+                }
+            }
+        }
+    }
+
+    return status;
+}
+
+static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeCmd *cmd, NvmeRequest *req)
+{
+    uint32_t dw13 = le32_to_cpu(cmd->cdw13);
+    uint64_t slba = 0;
+    uint64_t zone_idx = 0;
+    uint16_t status;
+    uint8_t action, state;
+    bool all;
+    NvmeZone *zone;
+
+    action = dw13 & 0xff;
+    all = dw13 & 0x100;
+
+    req->status = NVME_SUCCESS;
+
+    if (!all) {
+        status = nvme_get_mgmt_zone_slba_idx(n, ns, cmd, &slba, &zone_idx);
+        if (status) {
+            return status;
+        }
+    }
+
+    zone = &ns->zone_array[zone_idx];
+    if (slba != zone->d.zslba) {
+        trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba);
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+    state = nvme_get_zone_state(zone);
+
+    switch (action) {
+
+    case NVME_ZONE_ACTION_OPEN:
+        trace_pci_nvme_open_zone(slba, zone_idx, all);
+        status = nvme_do_zone_op(n, ns, zone, state, all,
+                                 nvme_open_zone, nvme_cond_open_all);
+        break;
+
+    case NVME_ZONE_ACTION_CLOSE:
+        trace_pci_nvme_close_zone(slba, zone_idx, all);
+        status = nvme_do_zone_op(n, ns, zone, state, all,
+                                 nvme_close_zone, nvme_cond_close_all);
+        break;
+
+    case NVME_ZONE_ACTION_FINISH:
+        trace_pci_nvme_finish_zone(slba, zone_idx, all);
+        status = nvme_do_zone_op(n, ns, zone, state, all,
+                                 nvme_finish_zone, nvme_cond_finish_all);
+        break;
+
+    case NVME_ZONE_ACTION_RESET:
+        trace_pci_nvme_reset_zone(slba, zone_idx, all);
+        status = nvme_do_zone_op(n, ns, zone, state, all,
+                                 nvme_reset_zone, nvme_cond_reset_all);
+        break;
+
+    case NVME_ZONE_ACTION_OFFLINE:
+        trace_pci_nvme_offline_zone(slba, zone_idx, all);
+        status = nvme_do_zone_op(n, ns, zone, state, all,
+                                 nvme_offline_zone,
nvme_cond_offline_all); + break; + + case NVME_ZONE_ACTION_SET_ZD_EXT: + trace_pci_nvme_set_descriptor_extension(slba, zone_idx); + return NVME_INVALID_FIELD | NVME_DNR; + break; + + default: + trace_pci_nvme_err_invalid_mgmt_action(action); + status = NVME_INVALID_FIELD; + } + + if (status == NVME_ZONE_INVAL_TRANSITION) { + trace_pci_nvme_err_invalid_zone_state_transition(state, action, slba, + zone->d.za); + } + if (status) { + status |= NVME_DNR; + } + + return status; +} + +static bool nvme_zone_matches_filter(uint32_t zafs, NvmeZone *zl) +{ + int zs = nvme_get_zone_state(zl); + + switch (zafs) { + case NVME_ZONE_REPORT_ALL: + return true; + case NVME_ZONE_REPORT_EMPTY: + return (zs == NVME_ZONE_STATE_EMPTY); + case NVME_ZONE_REPORT_IMPLICITLY_OPEN: + return (zs == NVME_ZONE_STATE_IMPLICITLY_OPEN); + case NVME_ZONE_REPORT_EXPLICITLY_OPEN: + return (zs == NVME_ZONE_STATE_EXPLICITLY_OPEN); + case NVME_ZONE_REPORT_CLOSED: + return (zs == NVME_ZONE_STATE_CLOSED); + case NVME_ZONE_REPORT_FULL: + return (zs == NVME_ZONE_STATE_FULL); + case NVME_ZONE_REPORT_READ_ONLY: + return (zs == NVME_ZONE_STATE_READ_ONLY); + case NVME_ZONE_REPORT_OFFLINE: + return (zs == NVME_ZONE_STATE_OFFLINE); + default: + return false; + } +} + +static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeNamespace *ns, + NvmeCmd *cmd, NvmeRequest *req) +{ + uint64_t prp1 = le64_to_cpu(cmd->prp1); + uint64_t prp2 = le64_to_cpu(cmd->prp2); + /* cdw12 is zero-based number of dwords to return. Convert to bytes */ + uint32_t len = (le32_to_cpu(cmd->cdw12) + 1) << 2; + uint32_t dw13 = le32_to_cpu(cmd->cdw13); + uint32_t zra, zrasf, partial; + uint64_t max_zones, zone_index, nr_zones = 0; + uint16_t ret; + uint64_t slba; + NvmeZoneDescr *z; + NvmeZone *zs; + NvmeZoneReportHeader *header; + void *buf, *buf_p; + size_t zone_entry_sz; + + req->status = NVME_SUCCESS; + + ret = nvme_get_mgmt_zone_slba_idx(n, ns, cmd, &slba, &zone_index); + if (ret) { + return ret; + } + + if (len < sizeof(NvmeZoneReportHeader)) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + zra = dw13 & 0xff; + if (!(zra == NVME_ZONE_REPORT || zra == NVME_ZONE_REPORT_EXTENDED)) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + if (zra == NVME_ZONE_REPORT_EXTENDED) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + zrasf = (dw13 >> 8) & 0xff; + if (zrasf > NVME_ZONE_REPORT_OFFLINE) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + partial = (dw13 >> 16) & 0x01; + + zone_entry_sz = sizeof(NvmeZoneDescr); + + max_zones = (len - sizeof(NvmeZoneReportHeader)) / zone_entry_sz; + buf = g_malloc0(len); + + header = (NvmeZoneReportHeader *)buf; + buf_p = buf + sizeof(NvmeZoneReportHeader); + + while (zone_index < n->num_zones && nr_zones < max_zones) { + zs = &ns->zone_array[zone_index]; + + if (!nvme_zone_matches_filter(zrasf, zs)) { + zone_index++; + continue; + } + + z = (NvmeZoneDescr *)buf_p; + buf_p += sizeof(NvmeZoneDescr); + nr_zones++; + + z->zt = zs->d.zt; + z->zs = zs->d.zs; + z->zcap = cpu_to_le64(zs->d.zcap); + z->zslba = cpu_to_le64(zs->d.zslba); + z->za = zs->d.za; + + if (nvme_wp_is_valid(zs)) { + z->wp = cpu_to_le64(zs->d.wp); + } else { + z->wp = cpu_to_le64(~0ULL); + } + + zone_index++; + } + + if (!partial) { + for (; zone_index < n->num_zones; zone_index++) { + zs = &ns->zone_array[zone_index]; + if (nvme_zone_matches_filter(zrasf, zs)) { + nr_zones++; + } + } + } + header->nr_zones = cpu_to_le64(nr_zones); + + ret = nvme_dma_read_prp(n, (uint8_t *)buf, len, prp1, prp2); + g_free(buf); + + return ret; +} + static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeCmd 
*cmd, NvmeRequest *req) { NvmeNamespace *ns; @@ -453,9 +1159,16 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return nvme_flush(n, ns, cmd, req); case NVME_CMD_WRITE_ZEROS: return nvme_write_zeros(n, ns, cmd, req); + case NVME_CMD_ZONE_APND: + req->flags |= NVME_REQ_FLG_APPEND; + /* fall through */ case NVME_CMD_WRITE: case NVME_CMD_READ: return nvme_rw(n, ns, cmd, req); + case NVME_CMD_ZONE_MGMT_SEND: + return nvme_zone_mgmt_send(n, ns, cmd, req); + case NVME_CMD_ZONE_MGMT_RECV: + return nvme_zone_mgmt_recv(n, ns, cmd, req); default: trace_pci_nvme_err_invalid_opc(cmd->opcode); return NVME_INVALID_OPCODE | NVME_DNR; @@ -675,6 +1388,16 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeCmd *cmd) return NVME_SUCCESS; } +static inline bool nvme_csi_has_nvm_support(NvmeNamespace *ns) +{ + switch (ns->csi) { + case NVME_CSI_NVM: + case NVME_CSI_ZONED: + return true; + } + return false; +} + static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeIdentify *c) { uint64_t prp1 = le64_to_cpu(c->prp1); @@ -701,6 +1424,12 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeIdentify *c) ret = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2); g_free(list); return ret; + } else if (c->csi == NVME_CSI_ZONED && n->params.zoned) { + NvmeIdCtrlZoned *id = g_malloc0(sizeof(*id)); + id->zamds = n->zamds; + ret = nvme_dma_read_prp(n, (uint8_t *)id, sizeof(*id), prp1, prp2); + g_free(id); + return ret; } else { return NVME_INVALID_FIELD | NVME_DNR; } @@ -723,8 +1452,12 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeIdentify *c) ns = &n->namespaces[nsid - 1]; assert(nsid == ns->nsid); - return nvme_dma_read_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), - prp1, prp2); + if (c->csi == NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { + return nvme_dma_read_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), + prp1, prp2); + } + + return NVME_INVALID_CMD_SET | NVME_DNR; } static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeIdentify *c) @@ -747,14 +1480,17 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeIdentify *c) ns = &n->namespaces[nsid - 1]; assert(nsid == ns->nsid); - if (c->csi == NVME_CSI_NVM) { + if (c->csi == NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { list = g_malloc0(data_len); ret = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2); g_free(list); return ret; - } else { - return NVME_INVALID_FIELD | NVME_DNR; + } else if (c->csi == NVME_CSI_ZONED && ns->csi == NVME_CSI_ZONED) { + return nvme_dma_read_prp(n, (uint8_t *)ns->id_ns_zoned, + sizeof(*ns->id_ns_zoned), prp1, prp2); } + + return NVME_INVALID_FIELD | NVME_DNR; } static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeIdentify *c) @@ -796,13 +1532,13 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeIdentify *c) trace_pci_nvme_identify_nslist_csi(min_nsid, c->csi); - if (c->csi != NVME_CSI_NVM) { + if (c->csi != NVME_CSI_NVM && c->csi != NVME_CSI_ZONED) { return NVME_INVALID_FIELD | NVME_DNR; } list = g_malloc0(data_len); for (i = 0; i < n->num_namespaces; i++) { - if (i < min_nsid) { + if (i < min_nsid || c->csi != n->namespaces[i].csi) { continue; } list[j++] = cpu_to_le32(i + 1); @@ -851,7 +1587,7 @@ static uint16_t nvme_list_ns_descriptors(NvmeCtrl *n, NvmeIdentify *c) desc->nidt = NVME_NIDT_CSI; desc->nidl = NVME_NIDL_CSI; buf_ptr += sizeof(*desc); - *(uint8_t *)buf_ptr = NVME_CSI_NVM; + *(uint8_t *)buf_ptr = ns->csi; status = nvme_dma_read_prp(n, buf, data_len, prp1, prp2); g_free(buf); @@ -872,6 +1608,9 @@ static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, 
NvmeIdentify *c) list = g_malloc0(data_len); ptr = (uint8_t *)list; NVME_SET_CSI(*ptr, NVME_CSI_NVM); + if (n->params.zoned) { + NVME_SET_CSI(*ptr, NVME_CSI_ZONED); + } status = nvme_dma_read_prp(n, (uint8_t *)list, data_len, prp1, prp2); g_free(list); return status; @@ -1038,7 +1777,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) } static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd, - uint64_t prp1, uint64_t prp2, uint64_t ofs, uint32_t len) + uint64_t prp1, uint64_t prp2, uint64_t ofs, uint32_t len, uint8_t csi) { NvmeEffectsLog cmd_eff_log = {}; uint32_t *iocs = cmd_eff_log.iocs; @@ -1063,11 +1802,19 @@ static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd, iocs[NVME_ADM_CMD_GET_FEATURES] = NVME_CMD_EFFECTS_CSUPP; iocs[NVME_ADM_CMD_GET_LOG_PAGE] = NVME_CMD_EFFECTS_CSUPP; - iocs[NVME_CMD_FLUSH] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC; - iocs[NVME_CMD_WRITE_ZEROS] = NVME_CMD_EFFECTS_CSUPP | - NVME_CMD_EFFECTS_LBCC; - iocs[NVME_CMD_WRITE] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC; - iocs[NVME_CMD_READ] = NVME_CMD_EFFECTS_CSUPP; + if (NVME_CC_CSS(n->bar.cc) != CSS_ADMIN_ONLY) { + iocs[NVME_CMD_FLUSH] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_WRITE_ZEROS] = NVME_CMD_EFFECTS_CSUPP | + NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_WRITE] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_READ] = NVME_CMD_EFFECTS_CSUPP; + } + if (csi == NVME_CSI_ZONED && NVME_CC_CSS(n->bar.cc) == CSS_ALL_NSTYPES) { + iocs[NVME_CMD_ZONE_APND] = NVME_CMD_EFFECTS_CSUPP | + NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_ZONE_MGMT_SEND] = NVME_CMD_EFFECTS_CSUPP; + iocs[NVME_CMD_ZONE_MGMT_RECV] = NVME_CMD_EFFECTS_CSUPP; + } return nvme_dma_read_prp(n, (uint8_t *)&cmd_eff_log, len, prp1, prp2); } @@ -1083,6 +1830,7 @@ static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd) uint64_t ofs = (dw13 << 32) | dw12; uint32_t numdl, numdu, len; uint16_t lid = dw10 & 0xff; + uint8_t csi = le32_to_cpu(cmd->cdw14) >> 24; numdl = dw10 >> 16; numdu = dw11 & 0xffff; @@ -1090,8 +1838,8 @@ static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd) switch (lid) { case NVME_LOG_CMD_EFFECTS: - return nvme_handle_cmd_effects(n, cmd, prp1, prp2, ofs, len); - } + return nvme_handle_cmd_effects(n, cmd, prp1, prp2, ofs, len, csi); + } trace_pci_nvme_unsupported_log_page(lid); return NVME_INVALID_FIELD | NVME_DNR; @@ -1255,6 +2003,14 @@ static int nvme_start_ctrl(NvmeCtrl *n) return -1; } + if (n->params.zoned) { + if (!n->params.zamds_bs) { + n->params.zamds_bs = NVME_DEFAULT_MAX_ZA_SIZE; + } + n->params.zamds_bs *= KiB; + n->zamds = nvme_ilog2(n->params.zamds_bs / page_size); + } + n->page_bits = page_bits; n->page_size = page_size; n->max_prp_ents = n->page_size / sizeof(uint64_t); @@ -1324,6 +2080,11 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, } switch (NVME_CC_CSS(data)) { case CSS_NVM_ONLY: + if (n->params.zoned) { + NVME_GUEST_ERR(pci_nvme_err_only_zoned_cmd_set_avail, + "only NVM+ZONED command set can be selected"); + break; + } trace_pci_nvme_css_nvm_cset_selected_by_host(data & 0xffffffff); break; case CSS_ALL_NSTYPES: @@ -1609,6 +2370,120 @@ static const MemoryRegionOps nvme_cmb_ops = { }, }; +static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity) +{ + NvmeZone *zone; + uint64_t start = 0, zone_size = n->params.zone_size; + int i; + + ns->zone_array = g_malloc0(n->zone_array_size); + ns->exp_open_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->imp_open_zones = 
g_malloc0(sizeof(NvmeZoneList)); + ns->closed_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->full_zones = g_malloc0(sizeof(NvmeZoneList)); + zone = ns->zone_array; + + nvme_init_zone_list(ns->exp_open_zones); + nvme_init_zone_list(ns->imp_open_zones); + nvme_init_zone_list(ns->closed_zones); + nvme_init_zone_list(ns->full_zones); + + for (i = 0; i < n->num_zones; i++, zone++) { + if (start + zone_size > capacity) { + zone_size = capacity - start; + } + zone->d.zt = NVME_ZONE_TYPE_SEQ_WRITE; + nvme_set_zone_state(zone, NVME_ZONE_STATE_EMPTY); + zone->d.za = 0; + zone->d.zcap = n->params.zone_capacity; + zone->d.zslba = start; + zone->d.wp = start; + zone->prev = 0; + zone->next = 0; + start += zone_size; + } + + return 0; +} + +static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) +{ + uint64_t zone_size = 0, capacity; + uint32_t nz; + + if (n->params.zone_size) { + zone_size = n->params.zone_size; + } else { + zone_size = NVME_DEFAULT_ZONE_SIZE; + } + if (!n->params.zone_capacity) { + n->params.zone_capacity = zone_size; + } + n->zone_size_bs = zone_size * MiB; + n->params.zone_size = n->zone_size_bs / n->conf.logical_block_size; + capacity = n->params.zone_capacity * MiB; + n->params.zone_capacity = capacity / n->conf.logical_block_size; + if (n->params.zone_capacity > n->params.zone_size) { + error_setg(errp, "zone capacity exceeds zone size"); + return; + } + zone_size = n->params.zone_size; + + capacity = n->ns_size / n->conf.logical_block_size; + nz = DIV_ROUND_UP(capacity, zone_size); + n->num_zones = nz; + n->zone_array_size = sizeof(NvmeZone) * nz; + + return; +} + +static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index, + Error **errp) +{ + int ret; + + ret = nvme_init_zone_meta(n, ns, n->num_zones * n->params.zone_size); + if (ret) { + error_setg(errp, "could not init zone metadata"); + return -1; + } + + ns->id_ns_zoned = g_malloc0(sizeof(*ns->id_ns_zoned)); + + /* MAR/MOR are zeroes-based, 0xffffffff means no limit */ + ns->id_ns_zoned->mar = 0xffffffff; + ns->id_ns_zoned->mor = 0xffffffff; + ns->id_ns_zoned->zoc = 0; + ns->id_ns_zoned->ozcs = n->params.cross_zone_read ? 
0x01 : 0x00; + + ns->id_ns_zoned->lbafe[lba_index].zsze = cpu_to_le64(n->params.zone_size); + ns->id_ns_zoned->lbafe[lba_index].zdes = 0; + + if (n->params.fill_pattern == 0) { + ns->id_ns.dlfeat = 0x01; + } else if (n->params.fill_pattern == 0xff) { + ns->id_ns.dlfeat = 0x02; + } + + return 0; +} + +static void nvme_zoned_clear(NvmeCtrl *n) +{ + int i; + + for (i = 0; i < n->num_namespaces; i++) { + NvmeNamespace *ns = &n->namespaces[i]; + g_free(ns->id_ns_zoned); + g_free(ns->zone_array); + g_free(ns->exp_open_zones); + g_free(ns->imp_open_zones); + g_free(ns->closed_zones); + g_free(ns->full_zones); + } +} + static void nvme_check_constraints(NvmeCtrl *n, Error **errp) { NvmeParams *params = &n->params; @@ -1674,18 +2549,13 @@ static void nvme_init_state(NvmeCtrl *n) static void nvme_init_blk(NvmeCtrl *n, Error **errp) { + int64_t bs_size; + if (!blkconf_blocksizes(&n->conf, errp)) { return; } blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk), false, errp); -} - -static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) -{ - int64_t bs_size; - NvmeIdNs *id_ns = &ns->id_ns; - int lba_index; bs_size = blk_getlength(n->conf.blk); if (bs_size < 0) { @@ -1694,6 +2564,12 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) } n->ns_size = bs_size; +} + +static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) +{ + NvmeIdNs *id_ns = &ns->id_ns; + int lba_index; ns->csi = NVME_CSI_NVM; qemu_uuid_generate(&ns->uuid); /* TODO make UUIDs persistent */ @@ -1701,8 +2577,18 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp) id_ns->lbaf[lba_index].ds = nvme_ilog2(n->conf.logical_block_size); id_ns->nsze = cpu_to_le64(nvme_ns_nlbas(n, ns)); + if (n->params.zoned) { + ns->csi = NVME_CSI_ZONED; + id_ns->ncap = cpu_to_le64(n->params.zone_capacity * n->num_zones); + if (nvme_zoned_init_ns(n, ns, lba_index, errp) != 0) { + return; + } + } else { + ns->csi = NVME_CSI_NVM; + id_ns->ncap = id_ns->nsze; + } + /* no thin provisioning */ - id_ns->ncap = id_ns->nsze; id_ns->nuse = id_ns->ncap; } @@ -1817,7 +2703,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) id->ieee[2] = 0xb3; id->oacs = cpu_to_le16(0); id->frmw = 7 << 1; - id->lpa = 1 << 0; + id->lpa = 1 << 1; id->sqes = (0x6 << 4) | 0x6; id->cqes = (0x4 << 4) | 0x4; id->nn = cpu_to_le32(n->num_namespaces); @@ -1834,8 +2720,9 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) NVME_CAP_SET_CQR(n->bar.cap, 1); NVME_CAP_SET_TO(n->bar.cap, 0xf); /* - * The driver now always supports NS Types, but all commands that - * support CSI field will only handle NVM Command Set. + * The driver now always supports NS Types, even when "zoned" property + * is set to zero. If this is the case, all commands that support CSI field + * only handle NVM Command Set. 
*/
    NVME_CAP_SET_CSS(n->bar.cap, (CAP_CSS_NVM | CAP_CSS_CSI_SUPP));
    NVME_CAP_SET_MPSMAX(n->bar.cap, 4);
@@ -1871,6 +2758,13 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         return;
     }
+    if (n->params.zoned) {
+        nvme_zoned_init_ctrl(n, &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
+            return;
+        }
+    }
     nvme_init_ctrl(n, pci_dev);
     ns = n->namespaces;
@@ -1889,6 +2783,9 @@ static void nvme_exit(PCIDevice *pci_dev)
     NvmeCtrl *n = NVME(pci_dev);
     nvme_clear_ctrl(n);
+    if (n->params.zoned) {
+        nvme_zoned_clear(n);
+    }
     g_free(n->namespaces);
     g_free(n->cq);
     g_free(n->sq);
@@ -1912,6 +2809,12 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT32("num_queues", NvmeCtrl, params.num_queues, 0),
     DEFINE_PROP_UINT32("max_ioqpairs", NvmeCtrl, params.max_ioqpairs, 64),
     DEFINE_PROP_UINT16("msix_qsize", NvmeCtrl, params.msix_qsize, 65),
+    DEFINE_PROP_BOOL("zoned", NvmeCtrl, params.zoned, false),
+    DEFINE_PROP_UINT64("zone_size", NvmeCtrl, params.zone_size, 512),
+    DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity, 512),
+    DEFINE_PROP_UINT32("zone_append_max_size", NvmeCtrl, params.zamds_bs, 0),
+    DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true),
+    DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0),
     DEFINE_PROP_END_OF_LIST(),
 };

From patchwork Wed Jun 17 21:34:08 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610819
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 11/18] hw/block/nvme: Introduce max active and open zone limits
Date: Thu, 18 Jun 2020 06:34:08 +0900
Message-Id: <20200617213415.22417-12-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

Add two module properties, "max_active" and "max_open", to control the maximum number of zones that can be active or open. Once these properties are set to non-default values, the driver checks the limits during I/O and returns the Too Many Active or Too Many Open command status if they are exceeded.
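As an illustration (not part of the patch itself), a zoned drive with these limits could be configured roughly as follows; the image path, device id, serial, and limit values are placeholders:

    -drive file=zns.img,id=nvmezns,format=raw,if=none \
    -device nvme,drive=nvmezns,serial=zns0,zoned=true,max_open=16,max_active=32

With such a configuration, a write that would implicitly open a zone beyond the open limit causes the controller to automatically close the oldest implicitly open zone (see nvme_auto_transition_zone below), while an explicit Open Zone beyond the limit fails with Too Many Open Zones.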
Signed-off-by: Hans Holmberg
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 183 +++++++++++++++++++++++++++++++++++++++++++++++-
 hw/block/nvme.h |   4 ++
 2 files changed, 185 insertions(+), 2 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 2e03b0b6ed..05a7cbcfcc 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -120,6 +120,87 @@ static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList *zl,
     zone->prev = zone->next = 0;
 }
 
+/*
+ * Take the first zone out from a list, return NULL if the list is empty.
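+ * (Zones are appended at the list tail, so the head is the oldest entry.)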
+ */ +static NvmeZone *nvme_remove_zone_head(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZoneList *zl) +{ + NvmeZone *zone = nvme_peek_zone_head(ns, zl); + + if (zone) { + --zl->size; + if (zl->size == 0) { + zl->head = NVME_ZONE_LIST_NIL; + zl->tail = NVME_ZONE_LIST_NIL; + } else { + zl->head = zone->next; + ns->zone_array[zl->head].prev = NVME_ZONE_LIST_NIL; + } + zone->prev = zone->next = 0; + } + + return zone; +} + +/* + * Check if we can open a zone without exceeding open/active limits. + * AOR stands for "Active and Open Resources" (see TP 4053 section 2.5). + */ +static int nvme_aor_check(NvmeCtrl *n, NvmeNamespace *ns, + uint32_t act, uint32_t opn) +{ + if (n->params.max_active_zones != 0 && + ns->nr_active_zones + act > n->params.max_active_zones) { + trace_pci_nvme_err_insuff_active_res(n->params.max_active_zones); + return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR; + } + if (n->params.max_open_zones != 0 && + ns->nr_open_zones + opn > n->params.max_open_zones) { + trace_pci_nvme_err_insuff_open_res(n->params.max_open_zones); + return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR; + } + + return NVME_SUCCESS; +} + +static inline void nvme_aor_inc_open(NvmeCtrl *n, NvmeNamespace *ns) +{ + assert(ns->nr_open_zones >= 0); + if (n->params.max_open_zones) { + ns->nr_open_zones++; + assert(ns->nr_open_zones <= n->params.max_open_zones); + } +} + +static inline void nvme_aor_dec_open(NvmeCtrl *n, NvmeNamespace *ns) +{ + if (n->params.max_open_zones) { + assert(ns->nr_open_zones > 0); + ns->nr_open_zones--; + } + assert(ns->nr_open_zones >= 0); +} + +static inline void nvme_aor_inc_active(NvmeCtrl *n, NvmeNamespace *ns) +{ + assert(ns->nr_active_zones >= 0); + if (n->params.max_active_zones) { + ns->nr_active_zones++; + assert(ns->nr_active_zones <= n->params.max_active_zones); + } +} + +static inline void nvme_aor_dec_active(NvmeCtrl *n, NvmeNamespace *ns) +{ + if (n->params.max_active_zones) { + assert(ns->nr_active_zones > 0); + ns->nr_active_zones--; + assert(ns->nr_active_zones >= ns->nr_open_zones); + } + assert(ns->nr_active_zones >= 0); +} + static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state) { @@ -454,6 +535,24 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); } +static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, + bool implicit, bool adding_active) +{ + NvmeZone *zone; + + if (implicit && n->params.max_open_zones && + ns->nr_open_zones == n->params.max_open_zones) { + zone = nvme_remove_zone_head(n, ns, ns->imp_open_zones); + if (zone) { + /* + * Automatically close this implicitly open zone. 
+ */ + nvme_aor_dec_open(n, ns); + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + } + } +} + static uint16_t nvme_check_zone_write(NvmeZone *zone, uint64_t slba, uint32_t nlb) { @@ -531,6 +630,23 @@ static uint16_t nvme_check_zone_read(NvmeCtrl *n, NvmeZone *zone, uint64_t slba, return status; } +static uint16_t nvme_auto_open_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone) +{ + uint16_t status = NVME_SUCCESS; + uint8_t zs = nvme_get_zone_state(zone); + + if (zs == NVME_ZONE_STATE_EMPTY) { + nvme_auto_transition_zone(n, ns, true, true); + status = nvme_aor_check(n, ns, 1, 1); + } else if (zs == NVME_ZONE_STATE_CLOSED) { + nvme_auto_transition_zone(n, ns, true, false); + status = nvme_aor_check(n, ns, 0, 1); + } + + return status; +} + static uint64_t nvme_finalize_zone_write(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint32_t nlb) { @@ -543,7 +659,11 @@ static uint64_t nvme_finalize_zone_write(NvmeCtrl *n, NvmeNamespace *ns, switch (zs) { case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_EMPTY: break; default: @@ -553,7 +673,10 @@ static uint64_t nvme_finalize_zone_write(NvmeCtrl *n, NvmeNamespace *ns, } else { switch (zs) { case NVME_ZONE_STATE_EMPTY: + nvme_aor_inc_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_inc_open(n, ns); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN); } @@ -636,6 +759,11 @@ static uint16_t nvme_write_zeros(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, zone->d.wp); return NVME_ZONE_INVALID_WRITE | NVME_DNR; } + + status = nvme_auto_open_zone(n, ns, zone); + if (status != NVME_SUCCESS) { + return status; + } } block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0, @@ -709,6 +837,11 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, zone->d.wp); return NVME_ZONE_INVALID_WRITE | NVME_DNR; } + + status = nvme_auto_open_zone(n, ns, zone); + if (status != NVME_SUCCESS) { + return status; + } } else { status = nvme_check_zone_read(n, zone, slba, nlb, n->params.cross_zone_read); @@ -804,9 +937,27 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeCtrl *n, NvmeNamespace *ns, static uint16_t nvme_open_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state) { + uint16_t status; + switch (state) { case NVME_ZONE_STATE_EMPTY: + nvme_auto_transition_zone(n, ns, false, true); + status = nvme_aor_check(n, ns, 1, 0); + if (status != NVME_SUCCESS) { + return status; + } + nvme_aor_inc_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + status = nvme_aor_check(n, ns, 0, 1); + if (status != NVME_SUCCESS) { + if (state == NVME_ZONE_STATE_EMPTY) { + nvme_aor_dec_active(n, ns); + } + return status; + } + nvme_aor_inc_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_IMPLICITLY_OPEN: nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPEN); /* fall through */ @@ -828,6 +979,7 @@ static uint16_t nvme_close_zone(NvmeCtrl *n, NvmeNamespace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); /* fall through */ case NVME_ZONE_STATE_CLOSED: @@ -849,7 +1001,11 @@ static uint16_t nvme_finish_zone(NvmeCtrl *n, NvmeNamespace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, 
ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_EMPTY: zone->d.wp = nvme_zone_wr_boundary(zone); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); @@ -874,7 +1030,11 @@ static uint16_t nvme_reset_zone(NvmeCtrl *n, NvmeNamespace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_FULL: zone->d.wp = zone->d.zslba; nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EMPTY); @@ -2412,6 +2572,15 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) uint64_t zone_size = 0, capacity; uint32_t nz; + if (n->params.max_open_zones < 0) { + error_setg(errp, "invalid max_open_zones value"); + return; + } + if (n->params.max_active_zones < 0) { + error_setg(errp, "invalid max_active_zones value"); + return; + } + if (n->params.zone_size) { zone_size = n->params.zone_size; } else { @@ -2435,6 +2604,14 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) n->num_zones = nz; n->zone_array_size = sizeof(NvmeZone) * nz; + /* Make sure that the values of all Zoned Command Set properties are sane */ + if (n->params.max_open_zones > nz) { + n->params.max_open_zones = nz; + } + if (n->params.max_active_zones > nz) { + n->params.max_active_zones = nz; + } + return; } @@ -2452,8 +2629,8 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index, ns->id_ns_zoned = g_malloc0(sizeof(*ns->id_ns_zoned)); /* MAR/MOR are zeroes-based, 0xffffffff means no limit */ - ns->id_ns_zoned->mar = 0xffffffff; - ns->id_ns_zoned->mor = 0xffffffff; + ns->id_ns_zoned->mar = cpu_to_le32(n->params.max_active_zones - 1); + ns->id_ns_zoned->mor = cpu_to_le32(n->params.max_open_zones - 1); ns->id_ns_zoned->zoc = 0; ns->id_ns_zoned->ozcs = n->params.cross_zone_read ? 
0x01 : 0x00;
@@ -2813,6 +2990,8 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT64("zone_size", NvmeCtrl, params.zone_size, 512),
     DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity, 512),
     DEFINE_PROP_UINT32("zone_append_max_size", NvmeCtrl, params.zamds_bs, 0),
+    DEFINE_PROP_INT32("max_active", NvmeCtrl, params.max_active_zones, 0),
+    DEFINE_PROP_INT32("max_open", NvmeCtrl, params.max_open_zones, 0),
     DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true),
     DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0),
     DEFINE_PROP_END_OF_LIST(),
 };

diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 2c932b5e29..f5a4679702 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -19,6 +19,8 @@ typedef struct NvmeParams {
     uint32_t zamds_bs;
     uint64_t zone_size;
     uint64_t zone_capacity;
+    int32_t max_active_zones;
+    int32_t max_open_zones;
 } NvmeParams;
 
 typedef struct NvmeAsyncEvent {
@@ -103,6 +105,8 @@ typedef struct NvmeNamespace {
     NvmeZoneList *imp_open_zones;
     NvmeZoneList *closed_zones;
     NvmeZoneList *full_zones;
+    int32_t nr_open_zones;
+    int32_t nr_active_zones;
 } NvmeNamespace;
 
 static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns)

From patchwork Wed Jun 17 21:34:09 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610757
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 12/18] hw/block/nvme: Simulate Zone Active excursions
Date: Thu, 18 Jun 2020 06:34:09 +0900
Message-Id: <20200617213415.22417-13-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

Add a Boolean flag, "active_excursions", to turn on simulation of Zone Active Excursions. If the flag is set to true, the driver tries to finish one of the currently active zones whenever the maximum active zones limit is about to be exceeded.
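For illustration only (the drive setup and the limit value are placeholders, not from the patch), the excursion behavior could be exercised with something like:

    -device nvme,drive=nvmezns,serial=zns0,zoned=true,max_active=4,active_excursions=true

and a workload that tries to keep more than four zones active; per the hunk below, the zone chosen for the excursion is the oldest closed zone, which the controller transitions to Full and marks with NVME_ZA_FINISHED_BY_CTLR.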
DKp1i+JEeJm0aYDopTBVOdlTFmeiMU4+zETPuQR0il557frLXJOuhrnJK Q==; IronPort-SDR: lZl8vRBTvx66mX/iWSBQsc8GnhL4DVAgmqhwDMe7KhK+vRYrMe9s0sWxnyfc4euOOrhyB00xWo xAgBacLAiDh7PmDUdkjzOJ4ucAFKMQRLrv/QJwVGyGsEn6Ad2GWKbqsYCT1eB4L0k4DahQnDFM OPhhU4sHMAQjzfPTdJWpDWLe1wRkBEot4Hcg95I+sTtLHkvKAcEsVrOu7Ds+m3rfifPbJ6rf4B ONdXIF3UVAKLtvKvdP/02/miKpKcQiRiFOXtCoibBdgBRYPHFjbrRS4Vh9U/X6QvACU1FSTzjx nBA= X-IronPort-AV: E=Sophos;i="5.73,523,1583164800"; d="scan'208";a="249439830" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 18 Jun 2020 05:34:49 +0800 IronPort-SDR: PEa+FCbeO0tHQfV3+wJtdTLzhxcIqOViDxJUmDyOjn2KJraaRVz/q0tWwJS3JJ8q0th04awRlb QAm7P4vJiRECs7OCIEgRlgo0IG971j5TA= Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 14:23:31 -0700 IronPort-SDR: W2J6aehr5zbeKew7QrLRUZe2nEe48EBK2+8hMwbwqfb6lPxRC/ck9J+Ij+lpEW3b6nqcWRRk+I k5oolLvYdHbA== WDCIronportException: Internal Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 17 Jun 2020 14:34:48 -0700 From: Dmitry Fomichev To: Kevin Wolf , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Maxim Levitsky Subject: [PATCH v2 12/18] hw/block/nvme: Simulate Zone Active excursions Date: Thu, 18 Jun 2020 06:34:09 +0900 Message-Id: <20200617213415.22417-13-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com> References: <20200617213415.22417-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=430b82a1d=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/17 17:34:28 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Matias Bjorling Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Added a Boolean flag to turn on simulation of Zone Active Excursions. If the flag, "active_excursions", is set to true, the driver will try to finish one of the currently open zone if max active zones limit is going to get exceeded. Signed-off-by: Dmitry Fomichev Reviewed-by: Alistair Francis --- hw/block/nvme.c | 24 +++++++++++++++++++++++- hw/block/nvme.h | 1 + 2 files changed, 24 insertions(+), 1 deletion(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 05a7cbcfcc..a29cbfcc96 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -540,6 +540,26 @@ static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, { NvmeZone *zone; + if (n->params.active_excursions && adding_active && + n->params.max_active_zones && + ns->nr_active_zones == n->params.max_active_zones) { + zone = nvme_peek_zone_head(ns, ns->closed_zones); + if (zone) { + /* + * The namespace is at the limit of active zones. 
+             * Try to finish one of the currently active zones
+             * to make the needed active zone resource available.
+             */
+            nvme_aor_dec_active(n, ns);
+            nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL);
+            zone->d.za &= ~(NVME_ZA_FINISH_RECOMMENDED |
+                            NVME_ZA_RESET_RECOMMENDED);
+            zone->d.za |= NVME_ZA_FINISHED_BY_CTLR;
+            zone->tstamp = 0;
+            trace_pci_nvme_zone_finished_by_controller(zone->d.zslba);
+        }
+    }
+
     if (implicit && n->params.max_open_zones &&
         ns->nr_open_zones == n->params.max_open_zones) {
         zone = nvme_remove_zone_head(n, ns, ns->imp_open_zones);
@@ -2631,7 +2651,7 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index,
 
     /* MAR/MOR are zeroes-based, 0xffffffff means no limit */
     ns->id_ns_zoned->mar = cpu_to_le32(n->params.max_active_zones - 1);
     ns->id_ns_zoned->mor = cpu_to_le32(n->params.max_open_zones - 1);
-    ns->id_ns_zoned->zoc = 0;
+    ns->id_ns_zoned->zoc = cpu_to_le16(n->params.active_excursions ? 0x2 : 0);
     ns->id_ns_zoned->ozcs = n->params.cross_zone_read ? 0x01 : 0x00;
 
     ns->id_ns_zoned->lbafe[lba_index].zsze = cpu_to_le64(n->params.zone_size);
@@ -2993,6 +3013,8 @@ static Property nvme_props[] = {
     DEFINE_PROP_INT32("max_active", NvmeCtrl, params.max_active_zones, 0),
     DEFINE_PROP_INT32("max_open", NvmeCtrl, params.max_open_zones, 0),
     DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true),
+    DEFINE_PROP_BOOL("active_excursions", NvmeCtrl, params.active_excursions,
+                     false),
     DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0),
     DEFINE_PROP_END_OF_LIST(),
 };

diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index f5a4679702..8a0aaeb09a 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -15,6 +15,7 @@ typedef struct NvmeParams {
     bool zoned;
     bool cross_zone_read;
+    bool active_excursions;
     uint8_t fill_pattern;
     uint32_t zamds_bs;
     uint64_t zone_size;

From patchwork Wed Jun 17 21:34:10 2020
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610747
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 13/18] hw/block/nvme: Set Finish/Reset Zone Recommended attributes
Date: Thu, 18 Jun 2020 06:34:10 +0900
Message-Id: <20200617213415.22417-14-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

Add logic to set and reset the FZR (Finish Zone Recommended) and RZR (Reset Zone Recommended) zone attributes. Four new driver properties are added to control the timing of setting and resetting these attributes. The FZR/RZR delay lasts from the triggering zone operation until the corresponding zone attribute is set; the FZR/RZR limit sets the time period between setting the FZR or RZR attribute and resetting it, simulating the internal controller action on that zone.
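Purely as an illustration (the values are arbitrary and the rest of the device options are placeholders), the four properties added below could be set like this:

    -device nvme,drive=nvmezns,serial=zns0,zoned=true,reset_rcmnd_delay=5000,reset_rcmnd_limit=10000,finish_rcmnd_delay=5000,finish_rcmnd_limit=10000

Judging by the SCALE_MS conversion applied in nvme_zoned_init_ctrl(), the values appear to be interpreted in milliseconds.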
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++
 hw/block/nvme.h | 13 ++++++-
 2 files changed, 111 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index a29cbfcc96..c3898448c7 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -201,6 +201,84 @@ static inline void nvme_aor_dec_active(NvmeCtrl *n, NvmeNamespace *ns)
     assert(ns->nr_active_zones >= 0);
 }
 
+static void nvme_set_rzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone)
+{
+    assert(zone->flags & NVME_ZFLAGS_SET_RZR);
+    zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+    zone->flags &= ~NVME_ZFLAGS_TS_DELAY;
+    zone->d.za |= NVME_ZA_RESET_RECOMMENDED;
+    zone->flags &= ~NVME_ZFLAGS_SET_RZR;
+    trace_pci_nvme_zone_reset_recommended(zone->d.zslba);
+}
+
+static void nvme_clear_rzr(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, bool notify)
+{
+    if (n->params.rrl_usec) {
+        zone->flags &= ~(NVME_ZFLAGS_SET_RZR | NVME_ZFLAGS_TS_DELAY);
+        notify = notify && (zone->d.za & NVME_ZA_RESET_RECOMMENDED);
+        zone->d.za &= ~NVME_ZA_RESET_RECOMMENDED;
+        zone->tstamp = 0;
+    }
+}
+
+static void nvme_set_fzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone)
+{
+    assert(zone->flags & NVME_ZFLAGS_SET_FZR);
+    zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+    zone->flags &= ~NVME_ZFLAGS_TS_DELAY;
+    zone->d.za |= NVME_ZA_FINISH_RECOMMENDED;
+    zone->flags &= ~NVME_ZFLAGS_SET_FZR;
+    trace_pci_nvme_zone_finish_recommended(zone->d.zslba);
+}
+
+static void nvme_clear_fzr(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, bool notify)
+{
+    if (n->params.frl_usec) {
+        zone->flags &= ~(NVME_ZFLAGS_SET_FZR | NVME_ZFLAGS_TS_DELAY);
+        notify = notify && (zone->d.za & NVME_ZA_FINISH_RECOMMENDED);
+        zone->d.za &= ~NVME_ZA_FINISH_RECOMMENDED;
+        zone->tstamp = 0;
+    }
+}
+
+static void nvme_schedule_rzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone)
+{
+    if (n->params.frl_usec) {
+        zone->flags &= ~(NVME_ZFLAGS_SET_FZR | NVME_ZFLAGS_TS_DELAY);
+        zone->d.za &= ~NVME_ZA_FINISH_RECOMMENDED;
+        zone->tstamp = 0;
+    }
+    if (n->params.rrl_usec) {
+        zone->flags |= NVME_ZFLAGS_SET_RZR;
+        if (n->params.rzr_delay_usec) {
+            zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+            zone->flags |= NVME_ZFLAGS_TS_DELAY;
+        } else {
+            nvme_set_rzr(n, ns, zone);
+        }
+    }
+}
+
+static void nvme_schedule_fzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone)
+{
+    if (n->params.rrl_usec) {
+        zone->flags &= ~(NVME_ZFLAGS_SET_RZR | NVME_ZFLAGS_TS_DELAY);
+        zone->d.za &= ~NVME_ZA_RESET_RECOMMENDED;
+        zone->tstamp = 0;
+    }
+    if (n->params.frl_usec) {
+        zone->flags |= NVME_ZFLAGS_SET_FZR;
+        if (n->params.fzr_delay_usec) {
+            zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+            zone->flags |= NVME_ZFLAGS_TS_DELAY;
+        } else {
+            nvme_set_fzr(n, ns, zone);
+        }
+    }
+}
+
 static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns,
     NvmeZone *zone, uint8_t state)
 {
@@ -208,15 +286,19 @@ static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns,
     switch (nvme_get_zone_state(zone)) {
     case NVME_ZONE_STATE_EXPLICITLY_OPEN:
         nvme_remove_zone(n, ns, ns->exp_open_zones, zone);
+        nvme_clear_fzr(n, ns, zone, false);
         break;
     case NVME_ZONE_STATE_IMPLICITLY_OPEN:
         nvme_remove_zone(n, ns, ns->imp_open_zones, zone);
+        nvme_clear_fzr(n, ns, zone, false);
         break;
     case NVME_ZONE_STATE_CLOSED:
         nvme_remove_zone(n, ns, ns->closed_zones, zone);
+        nvme_clear_fzr(n, ns, zone, false);
break; case NVME_ZONE_STATE_FULL: nvme_remove_zone(n, ns, ns->full_zones, zone); + nvme_clear_rzr(n, ns, zone, false); } } @@ -225,15 +307,19 @@ static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: nvme_add_zone_tail(n, ns, ns->exp_open_zones, zone); + nvme_schedule_fzr(n, ns, zone); break; case NVME_ZONE_STATE_IMPLICITLY_OPEN: nvme_add_zone_tail(n, ns, ns->imp_open_zones, zone); + nvme_schedule_fzr(n, ns, zone); break; case NVME_ZONE_STATE_CLOSED: nvme_add_zone_tail(n, ns, ns->closed_zones, zone); + nvme_schedule_fzr(n, ns, zone); break; case NVME_ZONE_STATE_FULL: nvme_add_zone_tail(n, ns, ns->full_zones, zone); + nvme_schedule_rzr(n, ns, zone); break; default: zone->d.za = 0; @@ -555,6 +641,7 @@ static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, zone->d.za &= ~(NVME_ZA_FINISH_RECOMMENDED | NVME_ZA_RESET_RECOMMENDED); zone->d.za |= NVME_ZA_FINISHED_BY_CTLR; + zone->flags = 0; zone->tstamp = 0; trace_pci_nvme_zone_finished_by_controller(zone->d.zslba); } @@ -2624,6 +2711,11 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) n->num_zones = nz; n->zone_array_size = sizeof(NvmeZone) * nz; + n->params.rzr_delay_usec *= SCALE_MS; + n->params.rrl_usec *= SCALE_MS; + n->params.fzr_delay_usec *= SCALE_MS; + n->params.frl_usec *= SCALE_MS; + /* Make sure that the values of all Zoned Command Set properties are sane */ if (n->params.max_open_zones > nz) { n->params.max_open_zones = nz; @@ -2651,6 +2743,8 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index, /* MAR/MOR are zeroes-based, 0xffffffff means no limit */ ns->id_ns_zoned->mar = cpu_to_le32(n->params.max_active_zones - 1); ns->id_ns_zoned->mor = cpu_to_le32(n->params.max_open_zones - 1); + ns->id_ns_zoned->rrl = cpu_to_le32(n->params.rrl_usec / (1000 * SCALE_MS)); + ns->id_ns_zoned->frl = cpu_to_le32(n->params.frl_usec / (1000 * SCALE_MS)); ns->id_ns_zoned->zoc = cpu_to_le16(n->params.active_excursions ? 0x2 : 0); ns->id_ns_zoned->ozcs = n->params.cross_zone_read ? 
0x01 : 0x00;
@@ -3012,6 +3106,11 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT32("zone_append_max_size", NvmeCtrl, params.zamds_bs, 0),
     DEFINE_PROP_INT32("max_active", NvmeCtrl, params.max_active_zones, 0),
     DEFINE_PROP_INT32("max_open", NvmeCtrl, params.max_open_zones, 0),
+    DEFINE_PROP_UINT64("reset_rcmnd_delay", NvmeCtrl, params.rzr_delay_usec, 0),
+    DEFINE_PROP_UINT64("reset_rcmnd_limit", NvmeCtrl, params.rrl_usec, 0),
+    DEFINE_PROP_UINT64("finish_rcmnd_delay", NvmeCtrl,
+                       params.fzr_delay_usec, 0),
+    DEFINE_PROP_UINT64("finish_rcmnd_limit", NvmeCtrl, params.frl_usec, 0),
     DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true),
     DEFINE_PROP_BOOL("active_excursions", NvmeCtrl, params.active_excursions,
                      false),

diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 8a0aaeb09a..be1920f1ef 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -22,6 +22,10 @@ typedef struct NvmeParams {
     uint64_t zone_capacity;
     int32_t max_active_zones;
     int32_t max_open_zones;
+    uint64_t rzr_delay_usec;
+    uint64_t rrl_usec;
+    uint64_t fzr_delay_usec;
+    uint64_t frl_usec;
 } NvmeParams;
 
 typedef struct NvmeAsyncEvent {
@@ -77,12 +81,19 @@ typedef struct NvmeCQueue {
     QTAILQ_HEAD(, NvmeRequest) req_list;
 } NvmeCQueue;
 
+enum NvmeZoneFlags {
+    NVME_ZFLAGS_TS_DELAY = 1 << 0,
+    NVME_ZFLAGS_SET_RZR  = 1 << 1,
+    NVME_ZFLAGS_SET_FZR  = 1 << 2,
+};
+
 typedef struct NvmeZone {
     NvmeZoneDescr d;
     uint64_t tstamp;
+    uint32_t flags;
     uint32_t next;
     uint32_t prev;
-    uint8_t rsvd80[8];
+    uint8_t rsvd84[4];
 } NvmeZone;
 
 #define NVME_ZONE_LIST_NIL UINT_MAX

From patchwork Wed Jun 17 21:34:11 2020
references:mime-version:content-transfer-encoding; bh=skl5Ggpz/2UW9vgyw9GZYwp/nwbjt+vbdAv8t7e3dRo=; b=Wm5NMa0qipbY8kIWntBUW7xHjQYDgSOwYV6lma1m7/+zXhV5IWwFMtGj Fjp+n7qIooKWY+65/ztQrGQmaIvZoeV+1P7ELw/ZyoJ45Ck2dB1/rjPzL Qp+//q7XOdETquGc2RgPzn0mhXCxjSg2UG4DqUH0mB9uRZXfUXMHnG8Ht WLgMGAlS6k5zc7PUYpCumGKYfobkvYLRSfftn6PLmnsd1baQPzbYcFb54 5e43ZzOum1ZPpv4pv2Yu1z9rk3dP0bckauD8xKDq2UPNT45F+qnOi6GHT XGqLRKBOO+xDOcvw9glroNBThkbdTGFYNqm1iRyj2uJTFIjNPZjHTxN+p w==; IronPort-SDR: X/7nppUwsEn55ao5JXv/YIhM5NvD4z22o2cHAhVaMf0m8E4PIZVTI4rqFENCqPcENgooQmAbsL HDsCCwTgazADDeRQH2jalVfNb/QaBXW1gb40+Bn7F0JElPLUxGrf4S2xB9mdg2SCmFvWZws/56 01wUIQoAGYLfOqz3Jguzk+nuE6m+q1eqBQhq3ikPwkbcXSSWAyNYBqcPK5k2zwj9Zb7qO5KUtA rEgv35DQG5kFn6VSnhe25egw8zx+IJsnu0mWPG4Yy3QUPKbHfdpJb8Q1UTCNOEgkjAkdqRNeh1 cWA= X-IronPort-AV: E=Sophos;i="5.73,523,1583164800"; d="scan'208";a="249439837" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 18 Jun 2020 05:34:53 +0800 IronPort-SDR: 15XiLfKmJ0w7j+FnfKPB9rXVs2Bfie1s1f73HWBaNdRiqY525d1uCqKU7fbl0+b6paISZw+JnN VhXW8vlIHYY1H/pJKcGYF/7wxYG08o82Q= Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 14:23:34 -0700 IronPort-SDR: eFoDU38SgqFJ4RrcjknkYKygZYVWkeAaJvRDq7BKD6kHl6h1Q9cilcHFF0MIgreQIlrlMpEJRs dJUnzNXQ+y5Q== WDCIronportException: Internal Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 17 Jun 2020 14:34:52 -0700 From: Dmitry Fomichev To: Kevin Wolf , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Maxim Levitsky Subject: [PATCH v2 14/18] hw/block/nvme: Generate zone AENs Date: Thu, 18 Jun 2020 06:34:11 +0900 Message-Id: <20200617213415.22417-15-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com> References: <20200617213415.22417-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=430b82a1d=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/06/17 17:34:28 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, URIBL_BLOCKED=0.001 autolearn=_AUTOLEARN X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Matias Bjorling Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Added an optional Boolean "zone_async_events" property to the driver. Once it's turned on, the namespace will be sending "Zone Descriptor Changed" asynchronous events to the host in particular situations defined by the protocol. In order to clear these AENs, the host needs to read the newly added Changed Zones Log. 
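As a rough host-side sketch (not part of this patch): nvme_complete_async_req()
in the diff below packs the event type into bits 7:0 of the completion's first
result dword, the event info into bits 15:8 and the log page ID into bits
23:16. A host that has reaped an AER completion could decode it along the
following lines; the constants mirror the values added by this series, while
handle_aer_result() itself is a hypothetical helper.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define AER_TYPE_NOTICE               0x2   /* NVME_AER_TYPE_NOTICE */
#define AER_NOTICE_ZONE_DESCR_CHANGED 0xef  /* as defined by this patch */

/* Decode the first result dword of an Asynchronous Event Request
 * completion and react to a Zone Descriptor Changed notice. */
static void handle_aer_result(uint32_t dw0, uint32_t nsid)
{
    uint8_t type     = dw0 & 0x7;          /* bits 2:0: event type */
    uint8_t info     = (dw0 >> 8) & 0xff;  /* bits 15:8: event info */
    uint8_t log_page = (dw0 >> 16) & 0xff; /* bits 23:16: log page ID */

    if (type == AER_TYPE_NOTICE && info == AER_NOTICE_ZONE_DESCR_CHANGED) {
        /* Reading the reported log page (0xbf) with RAE=0 clears it. */
        printf("zones changed on nsid %" PRIu32 ", read log page 0x%02x\n",
               nsid, log_page);
    }
}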
Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 300 ++++++++++++++++++++++++++++++++++++++++++- hw/block/nvme.h | 13 +- include/block/nvme.h | 23 +++- 3 files changed, 328 insertions(+), 8 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index c3898448c7..b9135a6b1f 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -201,12 +201,66 @@ static inline void nvme_aor_dec_active(NvmeCtrl *n, NvmeNamespace *ns) assert(ns->nr_active_zones >= 0); } +static bool nvme_complete_async_req(NvmeCtrl *n, NvmeNamespace *ns, + enum NvmeAsyncEventType type, uint8_t info) +{ + NvmeAsyncEvent *ae; + uint32_t nsid = 0; + uint8_t log_page = 0; + + switch (type) { + case NVME_AER_TYPE_ERROR: + case NVME_AER_TYPE_SMART: + break; + case NVME_AER_TYPE_NOTICE: + switch (info) { + case NVME_AER_NOTICE_ZONE_DESCR_CHANGED: + log_page = NVME_LOG_ZONE_CHANGED_LIST; + nsid = ns->nsid; + if (!(n->ae_cfg & NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES)) { + trace_pci_nvme_zone_ae_not_enabled(info, log_page, nsid); + return false; + } + if (ns->aen_pending) { + trace_pci_nvme_zone_ae_not_cleared(info, log_page, nsid); + return false; + } + ns->aen_pending = true; + } + break; + case NVME_AER_TYPE_CMDSET_SPECIFIC: + case NVME_AER_TYPE_VENDOR_SPECIFIC: + break; + } + + ae = g_malloc0(sizeof(*ae)); + ae->res = type; + ae->res |= (info << 8) & 0xff00; + ae->res |= (log_page << 16) & 0xff0000; + ae->nsid = nsid; + + QTAILQ_INSERT_TAIL(&n->async_reqs, ae, entry); + timer_mod(n->admin_cq.timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); + return true; +} + +static inline void nvme_notify_zone_changed(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone) +{ + if (n->ae_cfg) { + zone->flags |= NVME_ZFLAGS_AEN_PEND; + nvme_complete_async_req(n, ns, NVME_AER_TYPE_NOTICE, + NVME_AER_NOTICE_ZONE_DESCR_CHANGED); + } +} + static void nvme_set_rzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) { assert(zone->flags & NVME_ZFLAGS_SET_RZR); zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME); zone->flags &= ~NVME_ZFLAGS_TS_DELAY; zone->d.za |= NVME_ZA_RESET_RECOMMENDED; + nvme_notify_zone_changed(n, ns, zone); zone->flags &= ~NVME_ZFLAGS_SET_RZR; trace_pci_nvme_zone_reset_recommended(zone->d.zslba); } @@ -215,10 +269,14 @@ static void nvme_clear_rzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, bool notify) { if (n->params.rrl_usec) { - zone->flags &= ~(NVME_ZFLAGS_SET_RZR | NVME_ZFLAGS_TS_DELAY); + zone->flags &= ~(NVME_ZFLAGS_SET_RZR | NVME_ZFLAGS_TS_DELAY | + NVME_ZFLAGS_AEN_PEND); notify = notify && (zone->d.za & NVME_ZA_RESET_RECOMMENDED); zone->d.za &= ~NVME_ZA_RESET_RECOMMENDED; zone->tstamp = 0; + if (notify) { + nvme_notify_zone_changed(n, ns, zone); + } } } @@ -228,6 +286,7 @@ static void nvme_set_fzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) zone->tstamp = qemu_clock_get_ns(QEMU_CLOCK_REALTIME); zone->flags &= ~NVME_ZFLAGS_TS_DELAY; zone->d.za |= NVME_ZA_FINISH_RECOMMENDED; + nvme_notify_zone_changed(n, ns, zone); zone->flags &= ~NVME_ZFLAGS_SET_FZR; trace_pci_nvme_zone_finish_recommended(zone->d.zslba); } @@ -236,13 +295,61 @@ static void nvme_clear_fzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, bool notify) { if (n->params.frl_usec) { - zone->flags &= ~(NVME_ZFLAGS_SET_FZR | NVME_ZFLAGS_TS_DELAY); + zone->flags &= ~(NVME_ZFLAGS_SET_FZR | NVME_ZFLAGS_TS_DELAY | + NVME_ZFLAGS_AEN_PEND); notify = notify && (zone->d.za & NVME_ZA_FINISH_RECOMMENDED); zone->d.za &= ~NVME_ZA_FINISH_RECOMMENDED; zone->tstamp = 0; + if (notify) { + nvme_notify_zone_changed(n, ns, zone); + } } } +static bool 
nvme_process_rrl(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) +{ + if (zone->flags & NVME_ZFLAGS_SET_RZR) { + if (zone->flags & NVME_ZFLAGS_TS_DELAY) { + assert(!(zone->d.za & NVME_ZA_RESET_RECOMMENDED)); + if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - zone->tstamp >= + n->params.rzr_delay_usec) { + nvme_set_rzr(n, ns, zone); + return true; + } + } else if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - zone->tstamp >= + n->params.rrl_usec) { + assert(zone->d.za & NVME_ZA_RESET_RECOMMENDED); + nvme_clear_rzr(n, ns, zone, true); + trace_pci_nvme_zone_reset_internal_op(zone->d.zslba); + return true; + } + } + + return false; +} + +static bool nvme_process_frl(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) +{ + if (zone->flags & NVME_ZFLAGS_SET_FZR) { + if (zone->flags & NVME_ZFLAGS_TS_DELAY) { + assert(!(zone->d.za & NVME_ZA_FINISH_RECOMMENDED)); + if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - zone->tstamp >= + n->params.fzr_delay_usec) { + nvme_set_fzr(n, ns, zone); + return true; + } + } else if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - zone->tstamp >= + n->params.frl_usec) { + assert(zone->d.za & NVME_ZA_FINISH_RECOMMENDED); + nvme_clear_fzr(n, ns, zone, true); + trace_pci_nvme_zone_finish_internal_op(zone->d.zslba); + return true; + } + } + + return false; +} + static void nvme_schedule_rzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) { if (n->params.frl_usec) { @@ -279,6 +386,48 @@ static void nvme_schedule_fzr(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone) } } +static void nvme_observe_ns_zone_time_limits(NvmeCtrl *n, NvmeNamespace *ns) +{ + NvmeZone *zone; + + if (n->params.frl_usec) { + for (zone = nvme_peek_zone_head(ns, ns->closed_zones); + zone; + zone = nvme_next_zone_in_list(ns, zone, ns->closed_zones)) { + nvme_process_frl(n, ns, zone); + } + + for (zone = nvme_peek_zone_head(ns, ns->imp_open_zones); + zone; + zone = nvme_next_zone_in_list(ns, zone, ns->imp_open_zones)) { + nvme_process_frl(n, ns, zone); + } + + for (zone = nvme_peek_zone_head(ns, ns->exp_open_zones); + zone; + zone = nvme_next_zone_in_list(ns, zone, ns->exp_open_zones)) { + nvme_process_frl(n, ns, zone); + } + } + + if (n->params.rrl_usec) { + for (zone = nvme_peek_zone_head(ns, ns->full_zones); + zone; + zone = nvme_next_zone_in_list(ns, zone, ns->full_zones)) { + nvme_process_rrl(n, ns, zone); + } + } +} + +static void nvme_observe_zone_time_limits(NvmeCtrl *n) +{ + int i; + + for (i = 0; i < n->num_namespaces; i++) { + nvme_observe_ns_zone_time_limits(n, &n->namespaces[i]); + } +} + static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state) { @@ -563,6 +712,7 @@ static void nvme_post_cqes(void *opaque) NvmeCQueue *cq = opaque; NvmeCtrl *n = cq->ctrl; NvmeRequest *req, *next; + NvmeAsyncEvent *ae; QTAILQ_FOREACH_SAFE(req, &cq->req_list, entry, next) { NvmeSQueue *sq; @@ -572,8 +722,26 @@ static void nvme_post_cqes(void *opaque) break; } + ae = NULL; + if (req->flags & NVME_REQ_FLG_AER) { + if (likely(QTAILQ_EMPTY(&n->async_reqs))) { + continue; + } else { + ae = QTAILQ_FIRST(&n->async_reqs); + QTAILQ_REMOVE(&n->async_reqs, ae, entry); + } + } + QTAILQ_REMOVE(&cq->req_list, req, entry); sq = req->sq; + if (unlikely(ae)) { + assert(!sq->sqid); + req->cqe.ae.info = cpu_to_le32(ae->res); + req->cqe.ae.nsid = cpu_to_le32(ae->nsid); + g_free(ae); + assert(n->nr_aers); + n->nr_aers--; + } req->cqe.status = cpu_to_le16((req->status << 1) | cq->phase); req->cqe.sq_id = cpu_to_le16(sq->sqid); @@ -587,6 +755,15 @@ static void nvme_post_cqes(void *opaque) if (cq->tail != cq->head) 
{ nvme_irq_assert(n, cq); } + + if (cq == &n->admin_cq && + n->params.zoned && n->params.zone_async_events) { + nvme_observe_zone_time_limits(n); + if (timer_expired(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL))) { + timer_mod(cq->timer, + qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 10 * SCALE_MS); + } + } } static void nvme_fill_data(QEMUSGList *qsg, QEMUIOVector *iov, @@ -618,7 +795,9 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) assert(cq->cqid == req->sq->cqid); QTAILQ_REMOVE(&req->sq->out_req_list, req, entry); QTAILQ_INSERT_TAIL(&cq->req_list, req, entry); - timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); + if (!(req->flags & NVME_REQ_FLG_AER)) { + timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500); + } } static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, @@ -643,6 +822,7 @@ static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, zone->d.za |= NVME_ZA_FINISHED_BY_CTLR; zone->flags = 0; zone->tstamp = 0; + nvme_notify_zone_changed(n, ns, zone); trace_pci_nvme_zone_finished_by_controller(zone->d.zslba); } } @@ -1978,6 +2158,10 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) break; case NVME_TIMESTAMP: return nvme_get_feature_timestamp(n, cmd); + case NVME_ASYNCHRONOUS_EVENT_CONF: + result = cpu_to_le32(n->ae_cfg); + trace_pci_nvme_getfeat_aen_cfg(result); + break; case NVME_COMMAND_SET_PROFILE: result = 0; break; @@ -2029,6 +2213,19 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return nvme_set_feature_timestamp(n, cmd); break; + case NVME_ASYNCHRONOUS_EVENT_CONF: + if (dw11 & NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES) { + if (!(n->ae_cfg & NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES)) { + trace_pci_nvme_zone_aen_not_requested(dw11); + } else { + trace_pci_nvme_setfeat_zone_info_aer_on(); + } + } else if (n->ae_cfg & NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES) { + trace_pci_nvme_setfeat_zone_info_aer_off(); + n->ae_cfg &= ~NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES; + } + break; + case NVME_COMMAND_SET_PROFILE: if (dw11 & 0x1ff) { trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff); @@ -2043,6 +2240,18 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return NVME_SUCCESS; } +static uint16_t nvme_async_req(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) +{ + if (n->nr_aers >= NVME_MAX_ASYNC_EVENTS) { + return NVME_AER_LIMIT_EXCEEDED | NVME_DNR; + } + + assert(!(req->flags & NVME_REQ_FLG_AER)); + req->flags |= NVME_REQ_FLG_AER; + n->nr_aers++; + return NVME_SUCCESS; +} + static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd, uint64_t prp1, uint64_t prp2, uint64_t ofs, uint32_t len, uint8_t csi) { @@ -2068,6 +2277,7 @@ static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd, iocs[NVME_ADM_CMD_SET_FEATURES] = NVME_CMD_EFFECTS_CSUPP; iocs[NVME_ADM_CMD_GET_FEATURES] = NVME_CMD_EFFECTS_CSUPP; iocs[NVME_ADM_CMD_GET_LOG_PAGE] = NVME_CMD_EFFECTS_CSUPP; + iocs[NVME_ADM_CMD_ASYNC_EV_REQ] = NVME_CMD_EFFECTS_CSUPP; if (NVME_CC_CSS(n->bar.cc) != CSS_ADMIN_ONLY) { iocs[NVME_CMD_FLUSH] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC; @@ -2086,6 +2296,67 @@ static uint16_t nvme_handle_cmd_effects(NvmeCtrl *n, NvmeCmd *cmd, return nvme_dma_read_prp(n, (uint8_t *)&cmd_eff_log, len, prp1, prp2); } +static uint16_t nvme_handle_changed_zone_log(NvmeCtrl *n, NvmeCmd *cmd, + uint64_t prp1, uint64_t prp2, uint16_t nsid, uint64_t ofs, uint32_t len, + uint8_t csi, bool rae) +{ + NvmeNamespace *ns; + NvmeChangedZoneLog zc_log 
= {}; + NvmeZone *zone; + uint64_t *zid_ptr = &zc_log.zone_ids[0]; + uint64_t *zid_end = zid_ptr + ARRAY_SIZE(zc_log.zone_ids); + int i, nids = 0, num_aen_zones = 0; + + trace_pci_nvme_changed_zone_log_read(nsid); + + if (!n->params.zoned || !n->params.zone_async_events) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + if (unlikely(nsid == 0 || nsid > n->num_namespaces)) { + trace_pci_nvme_err_invalid_ns(nsid, n->num_namespaces); + return NVME_INVALID_FIELD | NVME_DNR; + } + ns = &n->namespaces[nsid - 1]; + if (csi != ns->csi) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + if (ofs != 0) { + trace_pci_nvme_err_invalid_changed_zone_list_offset(ofs); + return NVME_INVALID_FIELD | NVME_DNR; + } + if (len != sizeof(zc_log)) { + trace_pci_nvme_err_invalid_changed_zone_list_len(len); + return NVME_INVALID_FIELD | NVME_DNR; + } + + zone = ns->zone_array; + for (i = 0; i < n->num_zones && zid_ptr < zid_end; i++, zone++) { + if (!(zone->flags & NVME_ZFLAGS_AEN_PEND)) { + continue; + } + num_aen_zones++; + if (zone->d.za) { + trace_pci_nvme_reporting_changed_zone(zone->d.zslba, zone->d.za); + *zid_ptr++ = cpu_to_le64(zone->d.zslba); + nids++; + } + if (!rae) { + zone->flags &= ~NVME_ZFLAGS_AEN_PEND; + } + } + + if (num_aen_zones && !nids) { + trace_pci_nvme_empty_changed_zone_list(); + nids = 0xffff; + } + zc_log.nr_zone_ids = cpu_to_le16(nids); + ns->aen_pending = false; + + return nvme_dma_read_prp(n, (uint8_t *)&zc_log, len, prp1, prp2); +} + static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd) { uint64_t prp1 = le64_to_cpu(cmd->prp1); @@ -2095,9 +2366,11 @@ static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd) uint64_t dw12 = le32_to_cpu(cmd->cdw12); uint64_t dw13 = le32_to_cpu(cmd->cdw13); uint64_t ofs = (dw13 << 32) | dw12; + uint32_t nsid = le32_to_cpu(cmd->nsid); uint32_t numdl, numdu, len; uint16_t lid = dw10 & 0xff; uint8_t csi = le32_to_cpu(cmd->cdw14) >> 24; + bool rae = !!(dw10 & (1 << 15)); numdl = dw10 >> 16; numdu = dw11 & 0xffff; @@ -2106,6 +2379,9 @@ static uint16_t nvme_get_log_page(NvmeCtrl *n, NvmeCmd *cmd) switch (lid) { case NVME_LOG_CMD_EFFECTS: return nvme_handle_cmd_effects(n, cmd, prp1, prp2, ofs, len, csi); + case NVME_LOG_ZONE_CHANGED_LIST: + return nvme_handle_changed_zone_log(n, cmd, prp1, prp2, nsid, + ofs, len, csi, rae); } trace_pci_nvme_unsupported_log_page(lid); @@ -2131,6 +2407,8 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req) return nvme_get_feature(n, cmd, req); case NVME_ADM_CMD_GET_LOG_PAGE: return nvme_get_log_page(n, cmd); + case NVME_ADM_CMD_ASYNC_EV_REQ: + return nvme_async_req(n, cmd, req); default: trace_pci_nvme_err_invalid_admin_opc(cmd->opcode); return NVME_INVALID_OPCODE | NVME_DNR; @@ -2171,6 +2449,7 @@ static void nvme_process_sq(void *opaque) static void nvme_clear_ctrl(NvmeCtrl *n) { + NvmeAsyncEvent *ae_entry, *next; int i; blk_drain(n->conf.blk); @@ -2186,6 +2465,11 @@ static void nvme_clear_ctrl(NvmeCtrl *n) } } + QTAILQ_FOREACH_SAFE(ae_entry, &n->async_reqs, entry, next) { + g_free(ae_entry); + } + n->nr_aers = 0; + blk_flush(n->conf.blk); n->bar.cc = 0; } @@ -2290,6 +2574,9 @@ static int nvme_start_ctrl(NvmeCtrl *n) nvme_set_timestamp(n, 0ULL); + QTAILQ_INIT(&n->async_reqs); + n->nr_aers = 0; + return 0; } @@ -2724,6 +3011,10 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) n->params.max_active_zones = nz; } + if (n->params.zone_async_events) { + n->ae_cfg |= NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES; + } + return; } @@ -2993,6 +3284,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, 
PCIDevice *pci_dev) id->ieee[1] = 0x02; id->ieee[2] = 0xb3; id->oacs = cpu_to_le16(0); + id->oaes = cpu_to_le32(n->ae_cfg); id->frmw = 7 << 1; id->lpa = 1 << 1; id->sqes = (0x6 << 4) | 0x6; @@ -3111,6 +3403,8 @@ static Property nvme_props[] = { DEFINE_PROP_UINT64("finish_rcmnd_delay", NvmeCtrl, params.fzr_delay_usec, 0), DEFINE_PROP_UINT64("finish_rcmnd_limit", NvmeCtrl, params.frl_usec, 0), + DEFINE_PROP_BOOL("zone_async_events", NvmeCtrl, params.zone_async_events, + true), DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true), DEFINE_PROP_BOOL("active_excursions", NvmeCtrl, params.active_excursions, false), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index be1920f1ef..e63f7736d7 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -3,6 +3,7 @@ #include "block/nvme.h" +#define NVME_MAX_ASYNC_EVENTS 16 #define NVME_DEFAULT_ZONE_SIZE 128 /* MiB */ #define NVME_DEFAULT_MAX_ZA_SIZE 128 /* KiB */ @@ -15,6 +16,7 @@ typedef struct NvmeParams { bool zoned; bool cross_zone_read; + bool zone_async_events; bool active_excursions; uint8_t fill_pattern; uint32_t zamds_bs; @@ -29,13 +31,16 @@ typedef struct NvmeParams { } NvmeParams; typedef struct NvmeAsyncEvent { - QSIMPLEQ_ENTRY(NvmeAsyncEvent) entry; + QTAILQ_ENTRY(NvmeAsyncEvent) entry; + uint32_t res; + uint32_t nsid; } NvmeAsyncEvent; enum NvmeRequestFlags { NVME_REQ_FLG_HAS_SG = 1 << 0, NVME_REQ_FLG_FILL = 1 << 1, NVME_REQ_FLG_APPEND = 1 << 2, + NVME_REQ_FLG_AER = 1 << 3, }; typedef struct NvmeRequest { @@ -85,6 +90,7 @@ enum NvmeZoneFlags { NVME_ZFLAGS_TS_DELAY = 1 << 0, NVME_ZFLAGS_SET_RZR = 1 << 1, NVME_ZFLAGS_SET_FZR = 1 << 2, + NVME_ZFLAGS_AEN_PEND = 1 << 3, }; typedef struct NvmeZone { @@ -119,6 +125,7 @@ typedef struct NvmeNamespace { NvmeZoneList *full_zones; int32_t nr_open_zones; int32_t nr_active_zones; + bool aen_pending; } NvmeNamespace; static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns) @@ -173,6 +180,10 @@ typedef struct NvmeCtrl { NvmeSQueue admin_sq; NvmeCQueue admin_cq; NvmeIdCtrl id_ctrl; + + QTAILQ_HEAD(, NvmeAsyncEvent) async_reqs; + uint32_t nr_aers; + uint32_t ae_cfg; } NvmeCtrl; /* calculate the number of LBAs that the namespace can accomodate */ diff --git a/include/block/nvme.h b/include/block/nvme.h index 596c39162b..e06fb97337 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -633,16 +633,22 @@ enum NvmeAsyncErrorInfo { enum NvmeAsyncNoticeInfo { NVME_AER_NOTICE_NS_CHANGED = 0x00, + NVME_AER_NOTICE_ZONE_DESCR_CHANGED = 0xef, }; enum NvmeAsyncEventCfg { NVME_AEN_CFG_NS_ATTR = 1 << 8, + NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES = 1 << 27, }; typedef struct NvmeCqe { union { uint64_t result64; uint32_t result32; + struct { + uint32_t info; + uint32_t nsid; + } ae; }; uint16_t sq_head; uint16_t sq_id; @@ -778,11 +784,19 @@ enum { NVME_CMD_EFFECTS_UUID_SEL = 1 << 19, }; +typedef struct NvmeChangedZoneLog { + uint16_t nr_zone_ids; + uint8_t rsvd2[6]; + uint64_t zone_ids[511]; +} NvmeChangedZoneLog; + enum LogIdentifier { - NVME_LOG_ERROR_INFO = 0x01, - NVME_LOG_SMART_INFO = 0x02, - NVME_LOG_FW_SLOT_INFO = 0x03, - NVME_LOG_CMD_EFFECTS = 0x05, + NVME_LOG_ERROR_INFO = 0x01, + NVME_LOG_SMART_INFO = 0x02, + NVME_LOG_FW_SLOT_INFO = 0x03, + NVME_LOG_CHANGED_NS_LIST = 0x04, + NVME_LOG_CMD_EFFECTS = 0x05, + NVME_LOG_ZONE_CHANGED_LIST = 0xbf, }; typedef struct NvmePSD { @@ -1097,6 +1111,7 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsZoned) != 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096); + 
QEMU_BUILD_BUG_ON(sizeof(NvmeChangedZoneLog) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) != 64);
 }
 #endif

From patchwork Wed Jun 17 21:34:12 2020
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 15/18] hw/block/nvme: Support Zone Descriptor Extensions
Date: Thu, 18 Jun 2020 06:34:12 +0900
Message-Id: <20200617213415.22417-16-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

A Zone Descriptor Extension is a label that can be assigned to a zone. It can be set on an Empty zone and stays assigned until the zone is reset.

This commit adds a new optional property, "zone_descr_ext_size", to the driver. Its value must be a multiple of 64 bytes. If this value is non-zero, it becomes possible to assign extensions of that size to any Empty zone. The default value of this property is 0, therefore setting extensions is disabled by default.
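To make the layout concrete (an illustration, not code from the patch): all
extensions live in one flat buffer, zd_extension_size bytes per zone, and the
zdes field of the active LBA format advertises that size in units of 64 bytes,
hence the ">> 6" in nvme_zoned_init_ns() below. This helper mirrors
nvme_get_zd_extension() as added by this patch.

#include <assert.h>
#include <stdint.h>

/* Return the descriptor extension slot of one zone within the flat
 * per-namespace extension buffer. */
static inline uint8_t *zd_ext_for_zone(uint8_t *zd_extensions,
                                       uint32_t zd_extension_size,
                                       uint32_t zone_idx)
{
    assert((zd_extension_size & 0x3f) == 0);  /* multiple of 64 B */
    assert((zd_extension_size >> 6) <= 0xff); /* must fit the 8-bit zdes */
    return &zd_extensions[zone_idx * zd_extension_size];
}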
Signed-off-by: Hans Holmberg Signed-off-by: Dmitry Fomichev Reviewed-by: Klaus Jensen --- hw/block/nvme.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++--- hw/block/nvme.h | 8 ++++++ 2 files changed, 80 insertions(+), 4 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index b9135a6b1f..eb41081627 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -1360,6 +1360,26 @@ static bool nvme_cond_offline_all(uint8_t state) return state == NVME_ZONE_STATE_READ_ONLY; } +static uint16_t nvme_set_zd_ext(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + uint16_t status; + + if (state == NVME_ZONE_STATE_EMPTY) { + nvme_auto_transition_zone(n, ns, false, true); + status = nvme_aor_check(n, ns, 1, 0); + if (status != NVME_SUCCESS) { + return status; + } + nvme_aor_inc_active(n, ns); + zone->d.za |= NVME_ZA_ZD_EXT_VALID; + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + static uint16_t name_do_zone_op(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state, bool all, uint16_t (*op_hndlr)(NvmeCtrl *, NvmeNamespace *, NvmeZone *, @@ -1388,13 +1408,16 @@ static uint16_t name_do_zone_op(NvmeCtrl *n, NvmeNamespace *ns, static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeNamespace *ns, NvmeCmd *cmd, NvmeRequest *req) { + NvmeRwCmd *rw; uint32_t dw13 = le32_to_cpu(cmd->cdw13); + uint64_t prp1, prp2; uint64_t slba = 0; uint64_t zone_idx = 0; uint16_t status; uint8_t action, state; bool all; NvmeZone *zone; + uint8_t *zd_ext; action = dw13 & 0xff; all = dw13 & 0x100; @@ -1449,7 +1472,25 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeNamespace *ns, case NVME_ZONE_ACTION_SET_ZD_EXT: trace_pci_nvme_set_descriptor_extension(slba, zone_idx); - return NVME_INVALID_FIELD | NVME_DNR; + if (all || !n->params.zd_extension_size) { + return NVME_INVALID_FIELD | NVME_DNR; + } + zd_ext = nvme_get_zd_extension(n, ns, zone_idx); + rw = (NvmeRwCmd *)cmd; + prp1 = le64_to_cpu(rw->prp1); + prp2 = le64_to_cpu(rw->prp2); + status = nvme_dma_write_prp(n, zd_ext, n->params.zd_extension_size, + prp1, prp2); + if (status) { + trace_pci_nvme_err_zd_extension_map_error(zone_idx); + return status; + } + + status = nvme_set_zd_ext(n, ns, zone, state); + if (status == NVME_SUCCESS) { + trace_pci_nvme_zd_extension_set(zone_idx); + return status; + } break; default: @@ -1528,7 +1569,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeNamespace *ns, return NVME_INVALID_FIELD | NVME_DNR; } - if (zra == NVME_ZONE_REPORT_EXTENDED) { + if (zra == NVME_ZONE_REPORT_EXTENDED && !n->params.zd_extension_size) { return NVME_INVALID_FIELD | NVME_DNR; } @@ -1540,6 +1581,9 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeNamespace *ns, partial = (dw13 >> 16) & 0x01; zone_entry_sz = sizeof(NvmeZoneDescr); + if (zra == NVME_ZONE_REPORT_EXTENDED) { + zone_entry_sz += n->params.zd_extension_size; + } max_zones = (len - sizeof(NvmeZoneReportHeader)) / zone_entry_sz; buf = g_malloc0(len); @@ -1571,6 +1615,14 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeNamespace *ns, z->wp = cpu_to_le64(~0ULL); } + if (zra == NVME_ZONE_REPORT_EXTENDED) { + if (zs->d.za & NVME_ZA_ZD_EXT_VALID) { + memcpy(buf_p, nvme_get_zd_extension(n, ns, zone_index), + n->params.zd_extension_size); + } + buf_p += n->params.zd_extension_size; + } + zone_index++; } @@ -2337,7 +2389,7 @@ static uint16_t nvme_handle_changed_zone_log(NvmeCtrl *n, NvmeCmd *cmd, continue; } num_aen_zones++; - if (zone->d.za) { + if (zone->d.za & 
~NVME_ZA_ZD_EXT_VALID) { trace_pci_nvme_reporting_changed_zone(zone->d.zslba, zone->d.za); *zid_ptr++ = cpu_to_le64(zone->d.zslba); nids++; @@ -2936,6 +2988,7 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, ns->imp_open_zones = g_malloc0(sizeof(NvmeZoneList)); ns->closed_zones = g_malloc0(sizeof(NvmeZoneList)); ns->full_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->zd_extensions = g_malloc0(n->params.zd_extension_size * n->num_zones); zone = ns->zone_array; nvme_init_zone_list(ns->exp_open_zones); @@ -3010,6 +3063,17 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) if (n->params.max_active_zones > nz) { n->params.max_active_zones = nz; } + if (n->params.zd_extension_size) { + if (n->params.zd_extension_size & 0x3f) { + error_setg(errp, + "zone descriptor extension size must be a multiple of 64B"); + return; + } + if ((n->params.zd_extension_size >> 6) > 0xff) { + error_setg(errp, "zone descriptor extension size is too large"); + return; + } + } if (n->params.zone_async_events) { n->ae_cfg |= NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES; @@ -3040,7 +3104,8 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index, ns->id_ns_zoned->ozcs = n->params.cross_zone_read ? 0x01 : 0x00; ns->id_ns_zoned->lbafe[lba_index].zsze = cpu_to_le64(n->params.zone_size); - ns->id_ns_zoned->lbafe[lba_index].zdes = 0; + ns->id_ns_zoned->lbafe[lba_index].zdes = + n->params.zd_extension_size >> 6; /* Units of 64B */ if (n->params.fill_pattern == 0) { ns->id_ns.dlfeat = 0x01; @@ -3063,6 +3128,7 @@ static void nvme_zoned_clear(NvmeCtrl *n) g_free(ns->imp_open_zones); g_free(ns->closed_zones); g_free(ns->full_zones); + g_free(ns->zd_extensions); } } @@ -3396,6 +3462,8 @@ static Property nvme_props[] = { DEFINE_PROP_UINT64("zone_size", NvmeCtrl, params.zone_size, 512), DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity, 512), DEFINE_PROP_UINT32("zone_append_max_size", NvmeCtrl, params.zamds_bs, 0), + DEFINE_PROP_UINT32("zone_descr_ext_size", NvmeCtrl, + params.zd_extension_size, 0), DEFINE_PROP_INT32("max_active", NvmeCtrl, params.max_active_zones, 0), DEFINE_PROP_INT32("max_open", NvmeCtrl, params.max_open_zones, 0), DEFINE_PROP_UINT64("reset_rcmnd_delay", NvmeCtrl, params.rzr_delay_usec, 0), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index e63f7736d7..4251295917 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -24,6 +24,7 @@ typedef struct NvmeParams { uint64_t zone_capacity; int32_t max_active_zones; int32_t max_open_zones; + uint32_t zd_extension_size; uint64_t rzr_delay_usec; uint64_t rrl_usec; uint64_t fzr_delay_usec; @@ -123,6 +124,7 @@ typedef struct NvmeNamespace { NvmeZoneList *imp_open_zones; NvmeZoneList *closed_zones; NvmeZoneList *full_zones; + uint8_t *zd_extensions; int32_t nr_open_zones; int32_t nr_active_zones; bool aen_pending; @@ -221,6 +223,12 @@ static inline bool nvme_wp_is_valid(NvmeZone *zone) st != NVME_ZONE_STATE_OFFLINE; } +static inline uint8_t *nvme_get_zd_extension(NvmeCtrl *n, + NvmeNamespace *ns, uint32_t zone_idx) +{ + return &ns->zd_extensions[zone_idx * n->params.zd_extension_size]; +} + /* * Initialize a zone list head. 
 */

From patchwork Wed Jun 17 21:34:13 2020
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 16/18] hw/block/nvme: Add injection of Offline/Read-Only zones
Date: Thu, 18 Jun 2020 06:34:13 +0900
Message-Id: <20200617213415.22417-17-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>

The ZNS specification defines two zone conditions for zones that can no longer function properly, possibly because of flash wear or another internal fault. It is useful to be able to "inject" a small number of such zones for testing purposes.

This commit defines two optional driver properties, "offline_zones" and "rdonly_zones". The user can assign non-zero values to these properties to specify the number of zones to be initialized as Offline or Read-Only. The actual number of injected zones may be smaller than the requested amount; the Read-Only and Offline counts are expected to be much smaller than the total number of drive zones.
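The selection loop in nvme_init_zone_meta() (see the diff below) is plain
rejection sampling: draw a random zone index, retry while it falls into the
first max_open_zones zones, and replay the iteration when the chosen zone has
already been converted. In isolation, and with rand() standing in for
qcrypto_random_bytes(), one draw looks like this sketch:

#include <stdint.h>
#include <stdlib.h>

/* Pick a random zone index in [max_open, num_zones) by rejection
 * sampling; requires max_open < num_zones to terminate. */
static uint32_t pick_injectable_zone(uint32_t num_zones, uint32_t max_open)
{
    uint32_t rnd;

    do {
        rnd = (uint32_t)rand() % num_zones;
    } while (rnd < max_open);

    return rnd;
}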
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 hw/block/nvme.h | 2 ++
 2 files changed, 48 insertions(+)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index eb41081627..14d5f1d155 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -2980,8 +2980,11 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns,
     uint64_t capacity)
 {
     NvmeZone *zone;
+    Error *err;
     uint64_t start = 0, zone_size = n->params.zone_size;
+    uint32_t rnd;
     int i;
+    uint16_t zs;
 
     ns->zone_array = g_malloc0(n->zone_array_size);
     ns->exp_open_zones = g_malloc0(sizeof(NvmeZoneList));
@@ -3011,6 +3014,37 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns,
         start += zone_size;
     }
 
+    /* If required, make some zones Offline or Read Only */
+
+    for (i = 0; i < n->params.nr_offline_zones; i++) {
+        do {
+            qcrypto_random_bytes(&rnd, sizeof(rnd), &err);
+            rnd %= n->num_zones;
+        } while (rnd < n->params.max_open_zones);
+        zone = &ns->zone_array[rnd];
+        zs = nvme_get_zone_state(zone);
+        if (zs != NVME_ZONE_STATE_OFFLINE) {
+            nvme_set_zone_state(zone, NVME_ZONE_STATE_OFFLINE);
+        } else {
+            i--;
+        }
+    }
+
+    for (i = 0; i < n->params.nr_rdonly_zones; i++) {
+        do {
+            qcrypto_random_bytes(&rnd, sizeof(rnd), &err);
+            rnd %= n->num_zones;
+        } while (rnd < n->params.max_open_zones);
+        zone = &ns->zone_array[rnd];
+        zs = nvme_get_zone_state(zone);
+        if (zs != NVME_ZONE_STATE_OFFLINE &&
+            zs != NVME_ZONE_STATE_READ_ONLY) {
+            nvme_set_zone_state(zone, NVME_ZONE_STATE_READ_ONLY);
+        } else {
+            i--;
+        }
+    }
+
     return 0;
 }
@@ -3063,6 +3097,16 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp)
     if (n->params.max_active_zones > nz) {
         n->params.max_active_zones = nz;
     }
+    if (n->params.max_open_zones < nz) {
+        if (n->params.nr_offline_zones > nz - n->params.max_open_zones) {
+            n->params.nr_offline_zones = nz - n->params.max_open_zones;
+        }
+        if (n->params.nr_rdonly_zones >
+            nz - n->params.max_open_zones - n->params.nr_offline_zones) {
+            n->params.nr_rdonly_zones =
+                nz - n->params.max_open_zones - n->params.nr_offline_zones;
+        }
+    }
     if (n->params.zd_extension_size) {
         if (n->params.zd_extension_size & 0x3f) {
             error_setg(errp,
@@ -3471,6 +3515,8 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT64("finish_rcmnd_delay", NvmeCtrl,
                        params.fzr_delay_usec, 0),
     DEFINE_PROP_UINT64("finish_rcmnd_limit", NvmeCtrl, params.frl_usec, 0),
+    DEFINE_PROP_UINT32("offline_zones", NvmeCtrl, params.nr_offline_zones, 0),
+    DEFINE_PROP_UINT32("rdonly_zones", NvmeCtrl, params.nr_rdonly_zones, 0),
     DEFINE_PROP_BOOL("zone_async_events", NvmeCtrl, params.zone_async_events,
                      true),
     DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, true),
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 4251295917..900fc54809 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -24,6 +24,8 @@ typedef struct NvmeParams {
     uint64_t    zone_capacity;
     int32_t     max_active_zones;
     int32_t     max_open_zones;
+    uint32_t    nr_offline_zones;
+    uint32_t    nr_rdonly_zones;
     uint32_t    zd_extension_size;
     uint64_t    rzr_delay_usec;
     uint64_t    rrl_usec;

From patchwork Wed Jun 17 21:34:14 2020
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Subject: [PATCH v2 17/18] hw/block/nvme: Use zone metadata file for persistence
Date: Thu, 18 Jun 2020 06:34:14 +0900
Message-Id: <20200617213415.22417-18-dmitry.fomichev@wdc.com>
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
A ZNS drive that is emulated by this driver is currently initialized with all zones Empty upon startup. However, actual ZNS SSDs save the state and condition of all zones in their internal NVRAM in the event of power loss. When such a drive is powered up again, it closes or finishes all zones that were open at the moment of shutdown. Besides that, the write pointer position and the state and condition of all zones are preserved across power-downs.

This commit adds the capability to keep persistent zone metadata in the driver. The new optional driver property, "zone_file", is introduced. If added to the command line, this property specifies the name of the file that stores the zone metadata. If "zone_file" is omitted, the driver will initialize with all zones empty, the same as before.

If zone metadata is configured to be persistent, then zone descriptor extensions also persist across controller shutdowns.

Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 371 +++++++++++++++++++++++++++++++++++++++++++++---
 hw/block/nvme.h | 38 +++++
 2 files changed, 388 insertions(+), 21 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 14d5f1d155..63e7a6352e 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -69,6 +69,8 @@
 } while (0)
 
 static void nvme_process_sq(void *opaque);
+static void nvme_sync_zone_file(NvmeCtrl *n, NvmeNamespace *ns,
+    NvmeZone *zone, int len);
 
 /*
  * Add a zone to the tail of a zone list.
@@ -90,6 +92,7 @@ static void nvme_add_zone_tail(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList *zl, zl->tail = idx; } zl->size++; + nvme_set_zone_meta_dirty(n, ns, true); } /* @@ -106,12 +109,15 @@ static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList *zl, if (zl->size == 0) { zl->head = NVME_ZONE_LIST_NIL; zl->tail = NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else if (idx == zl->head) { zl->head = zone->next; ns->zone_array[zl->head].prev = NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else if (idx == zl->tail) { zl->tail = zone->prev; ns->zone_array[zl->tail].next = NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else { ns->zone_array[zone->next].prev = zone->prev; ns->zone_array[zone->prev].next = zone->next; @@ -138,6 +144,7 @@ static NvmeZone *nvme_remove_zone_head(NvmeCtrl *n, NvmeNamespace *ns, ns->zone_array[zl->head].prev = NVME_ZONE_LIST_NIL; } zone->prev = zone->next = 0; + nvme_set_zone_meta_dirty(n, ns, true); } return zone; @@ -476,6 +483,7 @@ static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, case NVME_ZONE_STATE_READ_ONLY: zone->tstamp = 0; } + nvme_sync_zone_file(n, ns, zone, sizeof(NvmeZone)); } static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr) @@ -2976,9 +2984,114 @@ static const MemoryRegionOps nvme_cmb_ops = { }, }; -static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, +static int nvme_validate_zone_file(NvmeCtrl *n, NvmeNamespace *ns, uint64_t capacity) { + NvmeZoneMeta *meta = ns->zone_meta; + NvmeZone *zone = ns->zone_array; + uint64_t start = 0, zone_size = n->params.zone_size; + int i, n_imp_open = 0, n_exp_open = 0, n_closed = 0, n_full = 0; + + if (meta->magic != NVME_ZONE_META_MAGIC) { + return 1; + } + if (meta->version != NVME_ZONE_META_VER) { + return 2; + } + if (meta->zone_size != zone_size) { + return 3; + } + if (meta->zone_capacity != n->params.zone_capacity) { + return 4; + } + if (meta->nr_offline_zones != n->params.nr_offline_zones) { + return 5; + } + if (meta->nr_rdonly_zones != n->params.nr_rdonly_zones) { + return 6; + } + if (meta->lba_size != n->conf.logical_block_size) { + return 7; + } + if (meta->zd_extension_size != n->params.zd_extension_size) { + return 8; + } + + for (i = 0; i < n->num_zones; i++, zone++) { + if (start + zone_size > capacity) { + zone_size = capacity - start; + } + if (zone->d.zt != NVME_ZONE_TYPE_SEQ_WRITE) { + return 9; + } + if (zone->d.zcap != n->params.zone_capacity) { + return 10; + } + if (zone->d.zslba != start) { + return 11; + } + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_OFFLINE: + case NVME_ZONE_STATE_READ_ONLY: + if (zone->d.wp != start) { + return 12; + } + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + if (zone->d.wp < start || + zone->d.wp >= zone->d.zslba + zone->d.zcap) { + return 13; + } + n_imp_open++; + break; + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + if (zone->d.wp < start || + zone->d.wp >= zone->d.zslba + zone->d.zcap) { + return 13; + } + n_exp_open++; + break; + case NVME_ZONE_STATE_CLOSED: + if (zone->d.wp < start || + zone->d.wp >= zone->d.zslba + zone->d.zcap) { + return 13; + } + n_closed++; + break; + case NVME_ZONE_STATE_FULL: + if (zone->d.wp != zone->d.zslba + zone->d.zcap) { + return 14; + } + n_full++; + break; + default: + return 15; + } + + start += zone_size; + } + + if (n_imp_open != nvme_zone_list_size(ns->exp_open_zones)) { + return 16; + } + if (n_exp_open != nvme_zone_list_size(ns->imp_open_zones)) { + return 17; + } + if 
(n_closed != nvme_zone_list_size(ns->closed_zones)) { + return 18; + } + if (n_full != nvme_zone_list_size(ns->full_zones)) { + return 19; + } + + return 0; +} + +static int nvme_init_zone_file(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity) +{ + NvmeZoneMeta *meta = ns->zone_meta; NvmeZone *zone; Error *err; uint64_t start = 0, zone_size = n->params.zone_size; @@ -2986,18 +3099,33 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, int i; uint16_t zs; - ns->zone_array = g_malloc0(n->zone_array_size); - ns->exp_open_zones = g_malloc0(sizeof(NvmeZoneList)); - ns->imp_open_zones = g_malloc0(sizeof(NvmeZoneList)); - ns->closed_zones = g_malloc0(sizeof(NvmeZoneList)); - ns->full_zones = g_malloc0(sizeof(NvmeZoneList)); - ns->zd_extensions = g_malloc0(n->params.zd_extension_size * n->num_zones); + if (n->params.zone_file) { + meta->magic = NVME_ZONE_META_MAGIC; + meta->version = NVME_ZONE_META_VER; + meta->zone_size = zone_size; + meta->zone_capacity = n->params.zone_capacity; + meta->lba_size = n->conf.logical_block_size; + meta->nr_offline_zones = n->params.nr_offline_zones; + meta->nr_rdonly_zones = n->params.nr_rdonly_zones; + meta->zd_extension_size = n->params.zd_extension_size; + } else { + ns->zone_array = g_malloc0(n->zone_array_size); + ns->exp_open_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->imp_open_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->closed_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->full_zones = g_malloc0(sizeof(NvmeZoneList)); + ns->zd_extensions = + g_malloc0(n->params.zd_extension_size * n->num_zones); + } zone = ns->zone_array; nvme_init_zone_list(ns->exp_open_zones); nvme_init_zone_list(ns->imp_open_zones); nvme_init_zone_list(ns->closed_zones); nvme_init_zone_list(ns->full_zones); + if (n->params.zone_file) { + nvme_set_zone_meta_dirty(n, ns, true); + } for (i = 0; i < n->num_zones; i++, zone++) { if (start + zone_size > capacity) { @@ -3048,7 +3176,189 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, return 0; } -static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) +static int nvme_open_zone_file(NvmeCtrl *n, bool *init_meta) +{ + struct stat statbuf; + size_t fsize; + int ret; + + ret = stat(n->params.zone_file, &statbuf); + if (ret && errno == ENOENT) { + *init_meta = true; + } else if (!S_ISREG(statbuf.st_mode)) { + fprintf(stderr, "%s is not a regular file\n", strerror(errno)); + return -1; + } + + n->zone_file_fd = open(n->params.zone_file, + O_RDWR | O_LARGEFILE | O_BINARY | O_CREAT, 644); + if (n->zone_file_fd < 0) { + fprintf(stderr, "failed to create zone file %s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + + fsize = n->meta_size * n->num_namespaces; + + if (stat(n->params.zone_file, &statbuf)) { + fprintf(stderr, "can't stat zone file %s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + if (statbuf.st_size != fsize) { + ret = ftruncate(n->zone_file_fd, fsize); + if (ret < 0) { + fprintf(stderr, "can't truncate zone file %s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + *init_meta = true; + } + + return 0; +} + +static int nvme_map_zone_file(NvmeCtrl *n, NvmeNamespace *ns, bool *init_meta) +{ + off_t meta_ofs = n->meta_size * (ns->nsid - 1); + + ns->zone_meta = mmap(0, n->meta_size, PROT_READ | PROT_WRITE, + MAP_SHARED, n->zone_file_fd, meta_ofs); + if (ns->zone_meta == MAP_FAILED) { + fprintf(stderr, "failed to map zone file %s, ofs %lu, err %s\n", + n->params.zone_file, meta_ofs, strerror(errno)); + return -1; + } + + 
ns->zone_array = (NvmeZone *)(ns->zone_meta + 1); + ns->exp_open_zones = &ns->zone_meta->exp_open_zones; + ns->imp_open_zones = &ns->zone_meta->imp_open_zones; + ns->closed_zones = &ns->zone_meta->closed_zones; + ns->full_zones = &ns->zone_meta->full_zones; + + if (n->params.zd_extension_size) { + ns->zd_extensions = (uint8_t *)(ns->zone_meta + 1); + ns->zd_extensions += n->zone_array_size; + } + + return 0; +} + +static void nvme_sync_zone_file(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, int len) +{ + uintptr_t addr, zd = (uintptr_t)zone; + + addr = zd & qemu_real_host_page_mask; + len += zd - addr; + if (msync((void *)addr, len, MS_ASYNC) < 0) + fprintf(stderr, "msync: failed to sync zone descriptors, file %s\n", + strerror(errno)); + + if (nvme_zone_meta_dirty(n, ns)) { + nvme_set_zone_meta_dirty(n, ns, false); + if (msync(ns->zone_meta, sizeof(NvmeZoneMeta), MS_ASYNC) < 0) + fprintf(stderr, "msync: failed to sync zone meta, file %s\n", + strerror(errno)); + } +} + +/* + * Close or finish all the zones that might be still open after power-down. + */ +static void nvme_prepare_zones(NvmeCtrl *n, NvmeNamespace *ns) +{ + NvmeZone *zone; + uint32_t set_state; + int i; + + assert(!ns->nr_active_zones); + assert(!ns->nr_open_zones); + + zone = ns->zone_array; + for (i = 0; i < n->num_zones; i++, zone++) { + zone->flags = 0; + zone->tstamp = 0; + + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + break; + case NVME_ZONE_STATE_CLOSED: + nvme_aor_inc_active(n, ns); + /* pass through */ + default: + continue; + } + + if (zone->d.za & NVME_ZA_ZD_EXT_VALID) { + set_state = NVME_ZONE_STATE_CLOSED; + } else if (zone->d.wp == zone->d.zslba) { + set_state = NVME_ZONE_STATE_EMPTY; + } else if (n->params.max_active_zones == 0 || + ns->nr_active_zones < n->params.max_active_zones) { + set_state = NVME_ZONE_STATE_CLOSED; + } else { + set_state = NVME_ZONE_STATE_FULL; + } + + switch (set_state) { + case NVME_ZONE_STATE_CLOSED: + trace_pci_nvme_power_on_close(nvme_get_zone_state(zone), + zone->d.zslba); + nvme_aor_inc_active(n, ns); + nvme_add_zone_tail(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_EMPTY: + trace_pci_nvme_power_on_reset(nvme_get_zone_state(zone), + zone->d.zslba); + break; + case NVME_ZONE_STATE_FULL: + trace_pci_nvme_power_on_full(nvme_get_zone_state(zone), + zone->d.zslba); + zone->d.wp = nvme_zone_wr_boundary(zone); + } + + nvme_set_zone_state(zone, set_state); + } +} + +static int nvme_load_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity, bool init_meta) +{ + int ret = 0; + + if (n->params.zone_file) { + ret = nvme_map_zone_file(n, ns, &init_meta); + trace_pci_nvme_mapped_zone_file(n->params.zone_file, ret); + if (ret < 0) { + return ret; + } + + if (!init_meta) { + ret = nvme_validate_zone_file(n, ns, capacity); + if (ret) { + trace_pci_nvme_err_zone_file_invalid(ret); + init_meta = true; + } + } + } else { + init_meta = true; + } + + if (init_meta) { + ret = nvme_init_zone_file(n, ns, capacity); + } else { + nvme_prepare_zones(n, ns); + } + if (!ret && n->params.zone_file) { + nvme_sync_zone_file(n, ns, ns->zone_array, n->zone_array_size); + } + + return ret; +} + +static void nvme_zoned_init_ctrl(NvmeCtrl *n, bool *init_meta, Error **errp) { uint64_t zone_size = 0, capacity; uint32_t nz; @@ -3084,6 +3394,9 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) nz = DIV_ROUND_UP(capacity, zone_size); n->num_zones = nz; n->zone_array_size = sizeof(NvmeZone) * nz; + 
+    n->meta_size = sizeof(NvmeZoneMeta) + n->zone_array_size +
+                   nz * n->params.zd_extension_size;
+    n->meta_size = ROUND_UP(n->meta_size, qemu_real_host_page_size);
 
     n->params.rzr_delay_usec *= SCALE_MS;
     n->params.rrl_usec *= SCALE_MS;
@@ -3119,6 +3432,13 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp)
         }
     }
 
+    if (n->params.zone_file) {
+        if (nvme_open_zone_file(n, init_meta) < 0) {
+            error_setg(errp, "cannot open zone metadata file");
+            return;
+        }
+    }
+
     if (n->params.zone_async_events) {
         n->ae_cfg |= NVME_AEN_CFG_ZONE_DESCR_CHNGD_NOTICES;
     }
@@ -3127,13 +3447,14 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp)
 }
 
 static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_index,
-                              Error **errp)
+                              bool init_meta, Error **errp)
 {
     int ret;
 
-    ret = nvme_init_zone_meta(n, ns, n->num_zones * n->params.zone_size);
+    ret = nvme_load_zone_meta(n, ns, n->num_zones * n->params.zone_size,
+                              init_meta);
     if (ret) {
-        error_setg(errp, "could not init zone metadata");
+        error_setg(errp, "could not load/init zone metadata");
         return -1;
     }
 
@@ -3164,15 +3485,20 @@ static void nvme_zoned_clear(NvmeCtrl *n)
 {
     int i;
 
+    if (n->params.zone_file) {
+        close(n->zone_file_fd);
+    }
     for (i = 0; i < n->num_namespaces; i++) {
         NvmeNamespace *ns = &n->namespaces[i];
         g_free(ns->id_ns_zoned);
-        g_free(ns->zone_array);
-        g_free(ns->exp_open_zones);
-        g_free(ns->imp_open_zones);
-        g_free(ns->closed_zones);
-        g_free(ns->full_zones);
-        g_free(ns->zd_extensions);
+        if (!n->params.zone_file) {
+            g_free(ns->zone_array);
+            g_free(ns->exp_open_zones);
+            g_free(ns->imp_open_zones);
+            g_free(ns->closed_zones);
+            g_free(ns->full_zones);
+            g_free(ns->zd_extensions);
+        }
     }
 }
 
@@ -3258,7 +3584,8 @@ static void nvme_init_blk(NvmeCtrl *n, Error **errp)
     n->ns_size = bs_size;
 }
 
-static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
+static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, bool init_meta,
+                                Error **errp)
 {
     NvmeIdNs *id_ns = &ns->id_ns;
     int lba_index;
@@ -3272,7 +3599,7 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
     if (n->params.zoned) {
         ns->csi = NVME_CSI_ZONED;
         id_ns->ncap = cpu_to_le64(n->params.zone_capacity * n->num_zones);
-        if (nvme_zoned_init_ns(n, ns, lba_index, errp) != 0) {
+        if (nvme_zoned_init_ns(n, ns, lba_index, init_meta, errp) != 0) {
            return;
        }
    } else {
@@ -3429,6 +3756,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
     NvmeCtrl *n = NVME(pci_dev);
     NvmeNamespace *ns;
     Error *local_err = NULL;
+    bool init_meta = false;
     int i;
 
@@ -3452,7 +3780,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
     }
 
     if (n->params.zoned) {
-        nvme_zoned_init_ctrl(n, &local_err);
+        nvme_zoned_init_ctrl(n, &init_meta, &local_err);
         if (local_err) {
             error_propagate(errp, local_err);
             return;
@@ -3463,7 +3791,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
     ns = n->namespaces;
     for (i = 0; i < n->num_namespaces; i++, ns++) {
         ns->nsid = i + 1;
-        nvme_init_namespace(n, ns, &local_err);
+        nvme_init_namespace(n, ns, init_meta, &local_err);
         if (local_err) {
             error_propagate(errp, local_err);
             return;
@@ -3506,6 +3834,7 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT64("zone_size", NvmeCtrl, params.zone_size, 512),
     DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity, 512),
     DEFINE_PROP_UINT32("zone_append_max_size", NvmeCtrl, params.zamds_bs, 0),
+    DEFINE_PROP_STRING("zone_file", NvmeCtrl, params.zone_file),
     DEFINE_PROP_UINT32("zone_descr_ext_size", NvmeCtrl,
                        params.zd_extension_size, 0),
     DEFINE_PROP_INT32("max_active", NvmeCtrl, params.max_active_zones, 0),
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 900fc54809..5e9a3a62f7 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -14,6 +14,7 @@ typedef struct NvmeParams {
     uint16_t msix_qsize;
     uint32_t cmb_size_mb;
+    char *zone_file;
     bool zoned;
     bool cross_zone_read;
     bool zone_async_events;
@@ -114,6 +115,27 @@ typedef struct NvmeZoneList {
     uint8_t rsvd12[4];
 } NvmeZoneList;
 
+#define NVME_ZONE_META_MAGIC 0x3aebaa70
+#define NVME_ZONE_META_VER  1
+
+typedef struct NvmeZoneMeta {
+    uint32_t magic;
+    uint32_t version;
+    uint64_t zone_size;
+    uint64_t zone_capacity;
+    uint32_t nr_offline_zones;
+    uint32_t nr_rdonly_zones;
+    uint32_t lba_size;
+    uint32_t rsvd40;
+    NvmeZoneList exp_open_zones;
+    NvmeZoneList imp_open_zones;
+    NvmeZoneList closed_zones;
+    NvmeZoneList full_zones;
+    uint8_t zd_extension_size;
+    uint8_t dirty;
+    uint8_t rsvd594[3990];
+} NvmeZoneMeta;
+
 typedef struct NvmeNamespace {
     NvmeIdNs id_ns;
     uint32_t nsid;
@@ -122,6 +144,7 @@ typedef struct NvmeNamespace {
     NvmeIdNsZoned *id_ns_zoned;
     NvmeZone *zone_array;
+    NvmeZoneMeta *zone_meta;
     NvmeZoneList *exp_open_zones;
     NvmeZoneList *imp_open_zones;
     NvmeZoneList *closed_zones;
@@ -174,6 +197,7 @@ typedef struct NvmeCtrl {
     int zone_file_fd;
     uint32_t num_zones;
+    size_t meta_size;
     uint64_t zone_size_bs;
     uint64_t zone_array_size;
     uint8_t zamds;
@@ -282,6 +306,19 @@ static inline NvmeZone *nvme_next_zone_in_list(NvmeNamespace *ns, NvmeZone *z,
     return &ns->zone_array[z->next];
 }
 
+static inline bool nvme_zone_meta_dirty(NvmeCtrl *n, NvmeNamespace *ns)
+{
+    return n->params.zone_file ? ns->zone_meta->dirty : false;
+}
+
+static inline void nvme_set_zone_meta_dirty(NvmeCtrl *n, NvmeNamespace *ns,
+                                            bool yesno)
+{
+    if (n->params.zone_file) {
+        ns->zone_meta->dirty = yesno;
+    }
+}
+
 static inline int nvme_ilog2(uint64_t i)
 {
     int log = -1;
@@ -295,6 +332,7 @@ static inline int nvme_ilog2(uint64_t i)
 
 static inline void _hw_nvme_check_size(void)
 {
+    QEMU_BUILD_BUG_ON(sizeof(NvmeZoneMeta) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeZoneList) != 16);
     QEMU_BUILD_BUG_ON(sizeof(NvmeZone) != 88);
 }

From patchwork Wed Jun 17 21:34:15 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Fomichev
X-Patchwork-Id: 11610777
From: Dmitry Fomichev
To: Kevin Wolf, Keith Busch, Philippe Mathieu-Daudé, Maxim Levitsky
Subject: [PATCH v2 18/18] hw/block/nvme: Document zoned parameters in
 usage text
Date: Thu, 18 Jun 2020 06:34:15 +0900
Message-Id: <20200617213415.22417-19-dmitry.fomichev@wdc.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
References: <20200617213415.22417-1-dmitry.fomichev@wdc.com>
MIME-Version: 1.0
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org,
 Dmitry Fomichev, qemu-devel@nongnu.org, Matias Bjorling
Sender: "Qemu-devel"

Add brief descriptions of the new
driver properties that are now available to users to configure the features
of the Zoned Namespace Command Set in the driver. This patch is for
documentation only; there is no functional change.

Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 60 insertions(+), 2 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 63e7a6352e..90b1ae24b5 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -9,7 +9,7 @@
  */

 /**
- * Reference Specs: http://www.nvmexpress.org, 1.2, 1.1, 1.0e
+ * Reference Specs: http://www.nvmexpress.org, 1.4, 1.3, 1.2, 1.1, 1.0e
  *
  *  http://www.nvmexpress.org/resources/
  */
@@ -20,7 +20,8 @@
  *     -device nvme,drive=<drive_id>,serial=<serial>,id=<id>, \
  *             cmb_size_mb=<cmb_size_mb>, \
  *             [pmrdev=<mem_backend_file_id>,] \
- *             max_ioqpairs=<N>
+ *             max_ioqpairs=<N> \
+ *             zoned=<true|false>
  *
  * Note cmb_size_mb denotes size of CMB in MB. CMB is assumed to be at
  * offset 0 in BAR2 and supports only WDS, RDS and SQS for now.
@@ -32,6 +33,63 @@
  * For example:
  * -object memory-backend-file,id=<mem_id>,share=on,mem-path=<path>, \
  *  size=<size> .... -device nvme,...,pmrdev=<mem_id>
+ *
+ * Setting "zoned" to true makes the driver emulate zoned namespaces.
+ * In this case, the following options are available to configure zoned
+ * operation:
+ *     zone_size=
+ *
+ *     zone_capacity=
+ *
+ *     zone_file=
+ *         Zone metadata file. If specified, it allows zone information
+ *         to be persistent across shutdowns and restarts.
+ *
+ *     zone_descr_ext_size=
+ *         This value needs to be specified in 64B units. If it is zero,
+ *         namespace(s) will not support zone descriptor extensions.
+ *
+ *     max_active=
+ *
+ *     max_open=
+ *
+ *     reset_rcmnd_delay=
+ *         The amount of time that passes between the moment when a zone
+ *         enters the Full state and when the Reset Zone Recommended
+ *         attribute is set for that zone.
+ *
+ *     reset_rcmnd_limit=
+ *         If this value is zero (default), the RZR attribute is not set
+ *         for any zones.
+ *
+ *     finish_rcmnd_delay=
+ *         The amount of time that passes between the moment when a zone
+ *         enters an Open or Closed state and when the Finish Zone
+ *         Recommended attribute is set for that zone.
+ *
+ *     finish_rcmnd_limit=
+ *         If this value is zero (default), the FZR attribute is not set
+ *         for any zones.
+ *
+ *     zamds=
+ *         The maximum I/O size that can be supported by the Zone Append
+ *         command. Since internally this value is maintained as
+ *         ZAMDS = log2(<zone append max size> / <min page size>), some
+ *         values assigned to this property may be rounded down and
+ *         result in a lower maximum ZA data size being in effect.
+ *
+ *     zone_async_events=
+ *         Enable sending Zone Descriptor Changed AENs to the host.
+ *
+ *     offline_zones=
+ *
+ *     rdonly_zones=
+ *
+ *     cross_zone_read=
+ *
+ *     fill_pattern=
+ *         The byte pattern to return for any portions of unwritten data
+ *         during read.
  */

 #include "qemu/osdep.h"
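
The series does not prescribe a particular command line, but as a rough
usage sketch of the parameters documented above (the image name, metadata
file name, serial string and numeric values below are made up), a zoned
device with persistent zone metadata might be configured along these lines:

    -drive file=zns.img,id=nvme-drv,format=raw,if=none \
    -device nvme,drive=nvme-drv,serial=zns-dev,zoned=true, \
            zone_size=128,zone_capacity=96,zone_file=zns.meta, \
            max_active=32,max_open=16,zone_async_events=true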
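
A worked example of the zamds rounding described in the usage text,
assuming (as with the ZASL field in the NVMe specification) that the
divisor is a minimum memory page size of 4 KiB: requesting a 1 MiB maximum
zone append size yields ZAMDS = log2(1 MiB / 4 KiB) = log2(256) = 8, while
requesting 1.5 MiB yields log2(384), roughly 8.58, which is rounded down to
the same ZAMDS of 8, so the effective limit is still 1 MiB.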
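
For readers unfamiliar with the mmap-based persistence that the zone_file=
patch in this series relies on, here is a minimal standalone sketch of the
same idea: a page-sized metadata header is mapped from a regular file,
mutated in place, and flushed with msync(). The structure, constants and
file name are illustrative only, not the ones from the patch.

/* zone_meta_sketch.c: build with `cc -o sketch zone_meta_sketch.c` */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define META_MAGIC 0x3aebaa70u
#define META_SIZE  4096u          /* one page, like sizeof(NvmeZoneMeta) */

struct meta {
    uint32_t magic;               /* identifies an initialized file */
    uint32_t version;
    uint8_t  dirty;               /* set before mutation, cleared on sync */
    uint8_t  pad[META_SIZE - 9];  /* keep the header exactly one page */
};

int main(void)
{
    const char *path = "zns.meta";            /* made-up file name */

    /* Create the backing file if needed and size it to one header. */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, META_SIZE) < 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* Map the header; stores to *m now go to the file's page cache. */
    struct meta *m = mmap(NULL, META_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A freshly truncated file reads back as zeroes, so initialize it. */
    if (m->magic != META_MAGIC) {
        memset(m, 0, META_SIZE);
        m->magic = META_MAGIC;
        m->version = 1;
    }

    m->dirty = 1;                 /* mark in-progress update */
    /* ... zone state updates would happen here ... */
    m->dirty = 0;

    /* Schedule write-back of the page to the backing file. */
    if (msync(m, META_SIZE, MS_ASYNC) < 0) {
        perror("msync");
    }

    munmap(m, META_SIZE);
    close(fd);
    return 0;
}

On restart, finding the magic number intact (and the dirty flag clear)
tells the reader that the previously saved state can be trusted, which is
the same validate-or-reinitialize decision the series makes in
nvme_load_zone_meta().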