From patchwork Thu Jun 23 21:34:28 2022
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 12893212
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , Philippe Mathieu-Daudé , Marcel Apfelbaum , Lukasz Maniak , Klaus Jensen
Subject: [PULL 01/15] hw/nvme: Add support for SR-IOV
Date: Thu, 23 Jun 2022 23:34:28 +0200
Message-Id: <20220623213442.67789-2-its@irrelevant.dk>
In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk>
References: <20220623213442.67789-1-its@irrelevant.dk>

From: Lukasz Maniak

This patch implements initial support for Single Root I/O Virtualization on an NVMe device. Essentially, it allows defining the maximum number of virtual functions supported by the NVMe controller via the sriov_max_vfs parameter.

Passing a non-zero value to sriov_max_vfs triggers reporting of the SR-IOV capability by the physical controller and of the ARI capability by both the physical and virtual function devices.

NVMe controllers created via virtual functions mirror the physical controller functionally, which may not be entirely desirable, so consideration is needed on how to limit the capabilities of a VF.

An NVMe subsystem is required for the use of SR-IOV.

Signed-off-by: Lukasz Maniak
Reviewed-by: Klaus Jensen
Acked-by: Michael S. Tsirkin
Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c | 85 ++++++++++++++++++++++++++++++++++++++--
 hw/nvme/nvme.h | 3 +-
 include/hw/pci/pci_ids.h | 1 +
 3 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 1e6e0fcad918..855ab55aa46b 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c
@@ -35,6 +35,7 @@ * mdts=,vsl=, \ * zoned.zasl=, \ * zoned.auto_transition=, \ + * sriov_max_vfs= \ * subsys= * -device nvme-ns,drive=,bus=,nsid=,\ * zoned=, \
@@ -106,6 +107,12 @@ * transitioned to zone state closed for resource management purposes. * Defaults to 'on'. * + * - `sriov_max_vfs` + * Indicates the maximum number of PCIe virtual functions supported + * by the controller. The default value is 0. Specifying a non-zero value + * enables reporting of both SR-IOV and ARI capabilities by the NVMe device. + * Virtual function controllers will not report SR-IOV capability.
+ * * nvme namespace device parameters * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * - `shared` @@ -160,6 +167,7 @@ #include "sysemu/block-backend.h" #include "sysemu/hostmem.h" #include "hw/pci/msix.h" +#include "hw/pci/pcie_sriov.h" #include "migration/vmstate.h" #include "nvme.h" @@ -176,6 +184,9 @@ #define NVME_TEMPERATURE_CRITICAL 0x175 #define NVME_NUM_FW_SLOTS 1 #define NVME_DEFAULT_MAX_ZA_SIZE (128 * KiB) +#define NVME_MAX_VFS 127 +#define NVME_VF_OFFSET 0x1 +#define NVME_VF_STRIDE 1 #define NVME_GUEST_ERR(trace, fmt, ...) \ do { \ @@ -5888,6 +5899,10 @@ static void nvme_ctrl_reset(NvmeCtrl *n) g_free(event); } + if (!pci_is_vf(&n->parent_obj) && n->params.sriov_max_vfs) { + pcie_sriov_pf_disable_vfs(&n->parent_obj); + } + n->aer_queued = 0; n->outstanding_aers = 0; n->qs_created = false; @@ -6569,6 +6584,29 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp) error_setg(errp, "vsl must be non-zero"); return; } + + if (params->sriov_max_vfs) { + if (!n->subsys) { + error_setg(errp, "subsystem is required for the use of SR-IOV"); + return; + } + + if (params->sriov_max_vfs > NVME_MAX_VFS) { + error_setg(errp, "sriov_max_vfs must be between 0 and %d", + NVME_MAX_VFS); + return; + } + + if (params->cmb_size_mb) { + error_setg(errp, "CMB is not supported with SR-IOV"); + return; + } + + if (n->pmr.dev) { + error_setg(errp, "PMR is not supported with SR-IOV"); + return; + } + } } static void nvme_init_state(NvmeCtrl *n) @@ -6626,6 +6664,20 @@ static void nvme_init_pmr(NvmeCtrl *n, PCIDevice *pci_dev) memory_region_set_enabled(&n->pmr.dev->mr, false); } +static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset, + uint64_t bar_size) +{ + uint16_t vf_dev_id = n->params.use_intel_id ? + PCI_DEVICE_ID_INTEL_NVME : PCI_DEVICE_ID_REDHAT_NVME; + + pcie_sriov_pf_init(pci_dev, offset, "nvme", vf_dev_id, + n->params.sriov_max_vfs, n->params.sriov_max_vfs, + NVME_VF_OFFSET, NVME_VF_STRIDE); + + pcie_sriov_pf_init_vf_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY | + PCI_BASE_ADDRESS_MEM_TYPE_64, bar_size); +} + static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) { uint8_t *pci_conf = pci_dev->config; @@ -6640,7 +6692,7 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) if (n->params.use_intel_id) { pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_INTEL); - pci_config_set_device_id(pci_conf, 0x5845); + pci_config_set_device_id(pci_conf, PCI_DEVICE_ID_INTEL_NVME); } else { pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_REDHAT); pci_config_set_device_id(pci_conf, PCI_DEVICE_ID_REDHAT_NVME); @@ -6648,6 +6700,9 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) pci_config_set_class(pci_conf, PCI_CLASS_STORAGE_EXPRESS); pcie_endpoint_cap_init(pci_dev, 0x80); + if (n->params.sriov_max_vfs) { + pcie_ari_init(pci_dev, 0x100, 1); + } bar_size = QEMU_ALIGN_UP(n->reg_size, 4 * KiB); msix_table_offset = bar_size; @@ -6666,8 +6721,12 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) n->reg_size); memory_region_add_subregion(&n->bar0, 0, &n->iomem); - pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY | - PCI_BASE_ADDRESS_MEM_TYPE_64, &n->bar0); + if (pci_is_vf(pci_dev)) { + pcie_sriov_vf_register_bar(pci_dev, 0, &n->bar0); + } else { + pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY | + PCI_BASE_ADDRESS_MEM_TYPE_64, &n->bar0); + } ret = msix_init(pci_dev, n->params.msix_qsize, &n->bar0, 0, msix_table_offset, &n->bar0, 0, msix_pba_offset, 0, &err); @@ -6688,6 +6747,10 @@ static 
int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) nvme_init_pmr(n, pci_dev); } + if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) { + nvme_init_sriov(n, pci_dev, 0x120, bar_size); + } + return 0; } @@ -6838,6 +6901,16 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp) NvmeCtrl *n = NVME(pci_dev); NvmeNamespace *ns; Error *local_err = NULL; + NvmeCtrl *pn = NVME(pcie_sriov_get_pf(pci_dev)); + + if (pci_is_vf(pci_dev)) { + /* + * VFs derive settings from the parent. PF's lifespan exceeds + * that of VF's, so it's safe to share params.serial. + */ + memcpy(&n->params, &pn->params, sizeof(NvmeParams)); + n->subsys = pn->subsys; + } nvme_check_constraints(n, &local_err); if (local_err) { @@ -6902,6 +6975,11 @@ static void nvme_exit(PCIDevice *pci_dev) if (n->pmr.dev) { host_memory_backend_set_mapped(n->pmr.dev, false); } + + if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) { + pcie_sriov_pf_exit(pci_dev); + } + msix_uninit(pci_dev, &n->bar0, &n->bar0); memory_region_del_subregion(&n->bar0, &n->iomem); } @@ -6926,6 +7004,7 @@ static Property nvme_props[] = { DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0), DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl, params.auto_transition_zones, true), + DEFINE_PROP_UINT8("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0), DEFINE_PROP_END_OF_LIST(), }; diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index e41771604f59..7e4bbd5606a0 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -24,7 +24,7 @@ #include "block/nvme.h" -#define NVME_MAX_CONTROLLERS 32 +#define NVME_MAX_CONTROLLERS 256 #define NVME_MAX_NAMESPACES 256 #define NVME_EUI64_DEFAULT ((uint64_t)0x5254000000000000) @@ -406,6 +406,7 @@ typedef struct NvmeParams { uint8_t zasl; bool auto_transition_zones; bool legacy_cmb; + uint8_t sriov_max_vfs; } NvmeParams; typedef struct NvmeCtrl { diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h index 898083b86f87..d5ddea558bf6 100644 --- a/include/hw/pci/pci_ids.h +++ b/include/hw/pci/pci_ids.h @@ -238,6 +238,7 @@ #define PCI_DEVICE_ID_INTEL_82801BA_11 0x244e #define PCI_DEVICE_ID_INTEL_82801D 0x24CD #define PCI_DEVICE_ID_INTEL_ESB_9 0x25ab +#define PCI_DEVICE_ID_INTEL_NVME 0x5845 #define PCI_DEVICE_ID_INTEL_82371SB_0 0x7000 #define PCI_DEVICE_ID_INTEL_82371SB_1 0x7010 #define PCI_DEVICE_ID_INTEL_82371SB_2 0x7020 From patchwork Thu Jun 23 21:34:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 94669C43334 for ; Thu, 23 Jun 2022 21:39:46 +0000 (UTC) Received: from localhost ([::1]:60592 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UYC-0008CI-Bt for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:39:45 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:54858) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTg-0004JG-PC; Thu, 23 Jun 2022 17:35:04 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:39701) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTe-0006Pt-R9; Thu, 23 Jun 2022 
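A usage sketch, not part of the patch above: once the controller is started with a non-zero sriov_max_vfs and an nvme-subsys device, a Linux guest can instantiate virtual functions through the standard sriov_numvfs sysfs attribute. The PCI address and the VF count below are placeholders.

/*
 * Hedged example: enable SR-IOV VFs on the emulated NVMe physical
 * function from inside a Linux guest. "0000:01:00.0" is a placeholder
 * address; the requested count must not exceed sriov_totalvfs.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *attr = "/sys/bus/pci/devices/0000:01:00.0/sriov_numvfs";
    const char *count = "3";
    int fd = open(attr, O_WRONLY);

    if (fd < 0) {
        perror("open sriov_numvfs");
        return 1;
    }
    if (write(fd, count, strlen(count)) < 0) {
        perror("write sriov_numvfs");   /* fails e.g. if VFs are already enabled */
        close(fd);
        return 1;
    }
    close(fd);
    printf("requested %s virtual functions\n", count);
    return 0;
}

Each VF then shows up as an additional NVMe controller bound to the same subsystem, mirroring the physical function as described in the commit message.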
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S.
Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , Lukasz Maniak , Klaus Jensen Subject: [PULL 02/15] hw/nvme: Add support for Primary Controller Capabilities Date: Thu, 23 Jun 2022 23:34:29 +0200 Message-Id: <20220623213442.67789-3-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Lukasz Maniak Implementation of Primary Controller Capabilities data structure (Identify command with CNS value of 14h). Currently, the command returns only ID of a primary controller. Handling of remaining fields are added in subsequent patches implementing virtualization enhancements. Signed-off-by: Lukasz Maniak Reviewed-by: Klaus Jensen Acked-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 23 ++++++++++++++++++----- hw/nvme/nvme.h | 2 ++ hw/nvme/trace-events | 1 + include/block/nvme.h | 23 +++++++++++++++++++++++ 4 files changed, 44 insertions(+), 5 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 855ab55aa46b..d004b484092c 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -4804,6 +4804,14 @@ static uint16_t nvme_identify_ctrl_list(NvmeCtrl *n, NvmeRequest *req, return nvme_c2h(n, (uint8_t *)list, sizeof(list), req); } +static uint16_t nvme_identify_pri_ctrl_cap(NvmeCtrl *n, NvmeRequest *req) +{ + trace_pci_nvme_identify_pri_ctrl_cap(le16_to_cpu(n->pri_ctrl_cap.cntlid)); + + return nvme_c2h(n, (uint8_t *)&n->pri_ctrl_cap, + sizeof(NvmePriCtrlCap), req); +} + static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, bool active) { @@ -5020,6 +5028,8 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) return nvme_identify_ctrl_list(n, req, true); case NVME_ID_CNS_CTRL_LIST: return nvme_identify_ctrl_list(n, req, false); + case NVME_ID_CNS_PRIMARY_CTRL_CAP: + return nvme_identify_pri_ctrl_cap(n, req); case NVME_ID_CNS_CS_NS: return nvme_identify_ns_csi(n, req, true); case NVME_ID_CNS_CS_NS_PRESENT: @@ -6611,6 +6621,8 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp) static void nvme_init_state(NvmeCtrl *n) { + NvmePriCtrlCap *cap = &n->pri_ctrl_cap; + /* add one to max_ioqpairs to account for the admin queue pair */ n->reg_size = pow2ceil(sizeof(NvmeBar) + 2 * (n->params.max_ioqpairs + 1) * NVME_DB_SIZE); @@ -6620,6 +6632,8 @@ static void nvme_init_state(NvmeCtrl *n) n->features.temp_thresh_hi = NVME_TEMPERATURE_WARNING; n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1); + + cap->cntlid = cpu_to_le16(n->cntlid); } static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev) @@ -6921,15 +6935,14 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp) qbus_init(&n->bus, 
sizeof(NvmeBus), TYPE_NVME_BUS, &pci_dev->qdev, n->parent_obj.qdev.id); + if (nvme_init_subsys(n, errp)) { + error_propagate(errp, local_err); + return; + } nvme_init_state(n); if (nvme_init_pci(n, pci_dev, errp)) { return; } - - if (nvme_init_subsys(n, errp)) { - error_propagate(errp, local_err); - return; - } nvme_init_ctrl(n, pci_dev); /* setup a namespace if the controller drive property was given */ diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 7e4bbd5606a0..6ef458d3bc24 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -478,6 +478,8 @@ typedef struct NvmeCtrl { uint32_t async_config; NvmeHostBehaviorSupport hbs; } features; + + NvmePriCtrlCap pri_ctrl_cap; } NvmeCtrl; static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid) diff --git a/hw/nvme/trace-events b/hw/nvme/trace-events index ff1b4589692b..1834b17cf214 100644 --- a/hw/nvme/trace-events +++ b/hw/nvme/trace-events @@ -56,6 +56,7 @@ pci_nvme_identify_ctrl(void) "identify controller" pci_nvme_identify_ctrl_csi(uint8_t csi) "identify controller, csi=0x%"PRIx8"" pci_nvme_identify_ns(uint32_t ns) "nsid %"PRIu32"" pci_nvme_identify_ctrl_list(uint8_t cns, uint16_t cntid) "cns 0x%"PRIx8" cntid %"PRIu16"" +pci_nvme_identify_pri_ctrl_cap(uint16_t cntlid) "identify primary controller capabilities cntlid=%"PRIu16"" pci_nvme_identify_ns_csi(uint32_t ns, uint8_t csi) "nsid=%"PRIu32", csi=0x%"PRIx8"" pci_nvme_identify_nslist(uint32_t ns) "nsid %"PRIu32"" pci_nvme_identify_nslist_csi(uint16_t ns, uint8_t csi) "nsid=%"PRIu16", csi=0x%"PRIx8"" diff --git a/include/block/nvme.h b/include/block/nvme.h index 3737351cc815..524a04fb94ec 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1033,6 +1033,7 @@ enum NvmeIdCns { NVME_ID_CNS_NS_PRESENT = 0x11, NVME_ID_CNS_NS_ATTACHED_CTRL_LIST = 0x12, NVME_ID_CNS_CTRL_LIST = 0x13, + NVME_ID_CNS_PRIMARY_CTRL_CAP = 0x14, NVME_ID_CNS_CS_NS_PRESENT_LIST = 0x1a, NVME_ID_CNS_CS_NS_PRESENT = 0x1b, NVME_ID_CNS_IO_COMMAND_SET = 0x1c, @@ -1553,6 +1554,27 @@ typedef enum NvmeZoneState { NVME_ZONE_STATE_OFFLINE = 0x0f, } NvmeZoneState; +typedef struct QEMU_PACKED NvmePriCtrlCap { + uint16_t cntlid; + uint16_t portid; + uint8_t crt; + uint8_t rsvd5[27]; + uint32_t vqfrt; + uint32_t vqrfa; + uint16_t vqrfap; + uint16_t vqprt; + uint16_t vqfrsm; + uint16_t vqgran; + uint8_t rsvd48[16]; + uint32_t vifrt; + uint32_t virfa; + uint16_t virfap; + uint16_t viprt; + uint16_t vifrsm; + uint16_t vigran; + uint8_t rsvd80[4016]; +} NvmePriCtrlCap; + static inline void _nvme_check_size(void) { QEMU_BUILD_BUG_ON(sizeof(NvmeBar) != 4096); @@ -1588,5 +1610,6 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) != 4); QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) != 64); QEMU_BUILD_BUG_ON(sizeof(NvmeDifTuple) != 16); + QEMU_BUILD_BUG_ON(sizeof(NvmePriCtrlCap) != 4096); } #endif From patchwork Thu Jun 23 21:34:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893209 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 87CF1C43334 for ; Thu, 23 Jun 2022 21:38:23 +0000 (UTC) Received: from localhost ([::1]:59604 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UWn-0007TF-LV 
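An illustration, not taken from the patch above: the Primary Controller Capabilities structure added by PULL 02/15 can be read from a Linux guest with the NVMe admin passthrough ioctl. The Identify opcode (06h), the CNS value 14h and the 4096-byte structure size follow the patch; the device node /dev/nvme0 is a placeholder.

/*
 * Hedged example: issue Identify with CNS 14h (Primary Controller
 * Capabilities) through the Linux passthrough interface and print the
 * CNTLID field, which is all the emulated controller fills in so far.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    uint8_t data[4096];
    struct nvme_admin_cmd cmd;
    int fd = open("/dev/nvme0", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/nvme0");
        return 1;
    }

    memset(data, 0, sizeof(data));
    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode   = 0x06;                     /* Identify */
    cmd.addr     = (uint64_t)(uintptr_t)data;
    cmd.data_len = sizeof(data);
    cmd.cdw10    = 0x14;                     /* CNS 14h: Primary Controller Capabilities */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        return 1;
    }

    /* CNTLID is the first little-endian 16-bit word of the structure. */
    printf("primary controller id: %u\n", (unsigned)(data[0] | (data[1] << 8)));
    return 0;
}

The same passthrough command with cmd.cdw10 = 0x15 retrieves the Secondary Controller List introduced in the next patch of the series.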
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , Philippe Mathieu-Daudé , Marcel Apfelbaum , Lukasz Maniak , Klaus Jensen
Subject: [PULL 03/15] hw/nvme: Add support for Secondary Controller List
Date: Thu, 23 Jun 2022 23:34:30 +0200
Message-Id: <20220623213442.67789-4-its@irrelevant.dk>
In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk>
References: <20220623213442.67789-1-its@irrelevant.dk>

From: Lukasz Maniak

Introduce handling for the Secondary Controller List (Identify command with CNS value 15h).

Secondary controller IDs are unique within the subsystem, so the subsystem reserves sriov_max_vfs of them when the primary controller is initialized. The reservation requires an intermediate controller slot state: a reserved slot holds the address 0xFFFF. A secondary controller is in the reserved state when it has no virtual function assigned but its primary controller is realized. Reserved slots are released back to NULL when the primary controller is unregistered.

Signed-off-by: Lukasz Maniak
Acked-by: Michael S.
Tsirkin Reviewed-by: Klaus Jensen Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 35 +++++++++++++++++++++ hw/nvme/ns.c | 2 +- hw/nvme/nvme.h | 18 +++++++++++ hw/nvme/subsys.c | 75 ++++++++++++++++++++++++++++++++++++++------ hw/nvme/trace-events | 1 + include/block/nvme.h | 20 ++++++++++++ 6 files changed, 141 insertions(+), 10 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index d004b484092c..b031212758cb 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -4812,6 +4812,29 @@ static uint16_t nvme_identify_pri_ctrl_cap(NvmeCtrl *n, NvmeRequest *req) sizeof(NvmePriCtrlCap), req); } +static uint16_t nvme_identify_sec_ctrl_list(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeIdentify *c = (NvmeIdentify *)&req->cmd; + uint16_t pri_ctrl_id = le16_to_cpu(n->pri_ctrl_cap.cntlid); + uint16_t min_id = le16_to_cpu(c->ctrlid); + uint8_t num_sec_ctrl = n->sec_ctrl_list.numcntl; + NvmeSecCtrlList list = {0}; + uint8_t i; + + for (i = 0; i < num_sec_ctrl; i++) { + if (n->sec_ctrl_list.sec[i].scid >= min_id) { + list.numcntl = num_sec_ctrl - i; + memcpy(&list.sec, n->sec_ctrl_list.sec + i, + list.numcntl * sizeof(NvmeSecCtrlEntry)); + break; + } + } + + trace_pci_nvme_identify_sec_ctrl_list(pri_ctrl_id, list.numcntl); + + return nvme_c2h(n, (uint8_t *)&list, sizeof(list), req); +} + static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, bool active) { @@ -5030,6 +5053,8 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) return nvme_identify_ctrl_list(n, req, false); case NVME_ID_CNS_PRIMARY_CTRL_CAP: return nvme_identify_pri_ctrl_cap(n, req); + case NVME_ID_CNS_SECONDARY_CTRL_LIST: + return nvme_identify_sec_ctrl_list(n, req); case NVME_ID_CNS_CS_NS: return nvme_identify_ns_csi(n, req, true); case NVME_ID_CNS_CS_NS_PRESENT: @@ -6622,6 +6647,9 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp) static void nvme_init_state(NvmeCtrl *n) { NvmePriCtrlCap *cap = &n->pri_ctrl_cap; + NvmeSecCtrlList *list = &n->sec_ctrl_list; + NvmeSecCtrlEntry *sctrl; + int i; /* add one to max_ioqpairs to account for the admin queue pair */ n->reg_size = pow2ceil(sizeof(NvmeBar) + @@ -6633,6 +6661,13 @@ static void nvme_init_state(NvmeCtrl *n) n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1); + list->numcntl = cpu_to_le16(n->params.sriov_max_vfs); + for (i = 0; i < n->params.sriov_max_vfs; i++) { + sctrl = &list->sec[i]; + sctrl->pcid = cpu_to_le16(n->cntlid); + sctrl->vfn = cpu_to_le16(i + 1); + } + cap->cntlid = cpu_to_le16(n->cntlid); } diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index 1b9c9d11567f..870c3ca1a2f0 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -597,7 +597,7 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp) for (i = 0; i < ARRAY_SIZE(subsys->ctrls); i++) { NvmeCtrl *ctrl = subsys->ctrls[i]; - if (ctrl) { + if (ctrl && ctrl != SUBSYS_SLOT_RSVD) { nvme_attach_ns(ctrl, ns); } } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 6ef458d3bc24..b66421cdf9e8 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -43,6 +43,7 @@ typedef struct NvmeBus { #define TYPE_NVME_SUBSYS "nvme-subsys" #define NVME_SUBSYS(obj) \ OBJECT_CHECK(NvmeSubsystem, (obj), TYPE_NVME_SUBSYS) +#define SUBSYS_SLOT_RSVD (void *)0xFFFF typedef struct NvmeSubsystem { DeviceState parent_obj; @@ -68,6 +69,10 @@ static inline NvmeCtrl *nvme_subsys_ctrl(NvmeSubsystem *subsys, return NULL; } + if (subsys->ctrls[cntlid] == SUBSYS_SLOT_RSVD) { + return NULL; + } + return subsys->ctrls[cntlid]; } @@ -480,6 +485,7 @@ typedef struct 
NvmeCtrl { } features; NvmePriCtrlCap pri_ctrl_cap; + NvmeSecCtrlList sec_ctrl_list; } NvmeCtrl; static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid) @@ -514,6 +520,18 @@ static inline uint16_t nvme_cid(NvmeRequest *req) return le16_to_cpu(req->cqe.cid); } +static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n) +{ + PCIDevice *pci_dev = &n->parent_obj; + NvmeCtrl *pf = NVME(pcie_sriov_get_pf(pci_dev)); + + if (pci_is_vf(pci_dev)) { + return &pf->sec_ctrl_list.sec[pcie_sriov_vf_number(pci_dev)]; + } + + return NULL; +} + void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns); uint16_t nvme_bounce_data(NvmeCtrl *n, void *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req); diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c index 691a90d20947..9d2643678b53 100644 --- a/hw/nvme/subsys.c +++ b/hw/nvme/subsys.c @@ -11,20 +11,71 @@ #include "nvme.h" +static int nvme_subsys_reserve_cntlids(NvmeCtrl *n, int start, int num) +{ + NvmeSubsystem *subsys = n->subsys; + NvmeSecCtrlList *list = &n->sec_ctrl_list; + NvmeSecCtrlEntry *sctrl; + int i, cnt = 0; + + for (i = start; i < ARRAY_SIZE(subsys->ctrls) && cnt < num; i++) { + if (!subsys->ctrls[i]) { + sctrl = &list->sec[cnt]; + sctrl->scid = cpu_to_le16(i); + subsys->ctrls[i] = SUBSYS_SLOT_RSVD; + cnt++; + } + } + + return cnt; +} + +static void nvme_subsys_unreserve_cntlids(NvmeCtrl *n) +{ + NvmeSubsystem *subsys = n->subsys; + NvmeSecCtrlList *list = &n->sec_ctrl_list; + NvmeSecCtrlEntry *sctrl; + int i, cntlid; + + for (i = 0; i < n->params.sriov_max_vfs; i++) { + sctrl = &list->sec[i]; + cntlid = le16_to_cpu(sctrl->scid); + + if (cntlid) { + assert(subsys->ctrls[cntlid] == SUBSYS_SLOT_RSVD); + subsys->ctrls[cntlid] = NULL; + sctrl->scid = 0; + } + } +} + int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp) { NvmeSubsystem *subsys = n->subsys; - int cntlid, nsid; + NvmeSecCtrlEntry *sctrl = nvme_sctrl(n); + int cntlid, nsid, num_rsvd, num_vfs = n->params.sriov_max_vfs; - for (cntlid = 0; cntlid < ARRAY_SIZE(subsys->ctrls); cntlid++) { - if (!subsys->ctrls[cntlid]) { - break; + if (pci_is_vf(&n->parent_obj)) { + cntlid = le16_to_cpu(sctrl->scid); + } else { + for (cntlid = 0; cntlid < ARRAY_SIZE(subsys->ctrls); cntlid++) { + if (!subsys->ctrls[cntlid]) { + break; + } } - } - if (cntlid == ARRAY_SIZE(subsys->ctrls)) { - error_setg(errp, "no more free controller id"); - return -1; + if (cntlid == ARRAY_SIZE(subsys->ctrls)) { + error_setg(errp, "no more free controller id"); + return -1; + } + + num_rsvd = nvme_subsys_reserve_cntlids(n, cntlid + 1, num_vfs); + if (num_rsvd != num_vfs) { + nvme_subsys_unreserve_cntlids(n); + error_setg(errp, + "no more free controller ids for secondary controllers"); + return -1; + } } if (!subsys->serial) { @@ -48,7 +99,13 @@ int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp) void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeCtrl *n) { - subsys->ctrls[n->cntlid] = NULL; + if (pci_is_vf(&n->parent_obj)) { + subsys->ctrls[n->cntlid] = SUBSYS_SLOT_RSVD; + } else { + subsys->ctrls[n->cntlid] = NULL; + nvme_subsys_unreserve_cntlids(n); + } + n->cntlid = -1; } diff --git a/hw/nvme/trace-events b/hw/nvme/trace-events index 1834b17cf214..889bbb3101e4 100644 --- a/hw/nvme/trace-events +++ b/hw/nvme/trace-events @@ -57,6 +57,7 @@ pci_nvme_identify_ctrl_csi(uint8_t csi) "identify controller, csi=0x%"PRIx8"" pci_nvme_identify_ns(uint32_t ns) "nsid %"PRIu32"" pci_nvme_identify_ctrl_list(uint8_t cns, uint16_t cntid) "cns 0x%"PRIx8" cntid %"PRIu16"" pci_nvme_identify_pri_ctrl_cap(uint16_t cntlid) 
"identify primary controller capabilities cntlid=%"PRIu16"" +pci_nvme_identify_sec_ctrl_list(uint16_t cntlid, uint8_t numcntl) "identify secondary controller list cntlid=%"PRIu16" numcntl=%"PRIu8"" pci_nvme_identify_ns_csi(uint32_t ns, uint8_t csi) "nsid=%"PRIu32", csi=0x%"PRIx8"" pci_nvme_identify_nslist(uint32_t ns) "nsid %"PRIu32"" pci_nvme_identify_nslist_csi(uint16_t ns, uint8_t csi) "nsid=%"PRIu16", csi=0x%"PRIx8"" diff --git a/include/block/nvme.h b/include/block/nvme.h index 524a04fb94ec..94efd32578cb 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1034,6 +1034,7 @@ enum NvmeIdCns { NVME_ID_CNS_NS_ATTACHED_CTRL_LIST = 0x12, NVME_ID_CNS_CTRL_LIST = 0x13, NVME_ID_CNS_PRIMARY_CTRL_CAP = 0x14, + NVME_ID_CNS_SECONDARY_CTRL_LIST = 0x15, NVME_ID_CNS_CS_NS_PRESENT_LIST = 0x1a, NVME_ID_CNS_CS_NS_PRESENT = 0x1b, NVME_ID_CNS_IO_COMMAND_SET = 0x1c, @@ -1575,6 +1576,23 @@ typedef struct QEMU_PACKED NvmePriCtrlCap { uint8_t rsvd80[4016]; } NvmePriCtrlCap; +typedef struct QEMU_PACKED NvmeSecCtrlEntry { + uint16_t scid; + uint16_t pcid; + uint8_t scs; + uint8_t rsvd5[3]; + uint16_t vfn; + uint16_t nvq; + uint16_t nvi; + uint8_t rsvd14[18]; +} NvmeSecCtrlEntry; + +typedef struct QEMU_PACKED NvmeSecCtrlList { + uint8_t numcntl; + uint8_t rsvd1[31]; + NvmeSecCtrlEntry sec[127]; +} NvmeSecCtrlList; + static inline void _nvme_check_size(void) { QEMU_BUILD_BUG_ON(sizeof(NvmeBar) != 4096); @@ -1611,5 +1629,7 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) != 64); QEMU_BUILD_BUG_ON(sizeof(NvmeDifTuple) != 16); QEMU_BUILD_BUG_ON(sizeof(NvmePriCtrlCap) != 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeSecCtrlEntry) != 32); + QEMU_BUILD_BUG_ON(sizeof(NvmeSecCtrlList) != 4096); } #endif From patchwork Thu Jun 23 21:34:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893216 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D41A8C433EF for ; Thu, 23 Jun 2022 21:47:43 +0000 (UTC) Received: from localhost ([::1]:50512 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Ufu-00040N-RR for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:47:42 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:54960) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTp-0004R7-SE; Thu, 23 Jun 2022 17:35:13 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:47733) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTl-0006cz-6r; Thu, 23 Jun 2022 17:35:13 -0400 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.west.internal (Postfix) with ESMTP id F1C0D3200978; Thu, 23 Jun 2022 17:35:05 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute5.internal (MEProxy); Thu, 23 Jun 2022 17:35:07 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020105; x= 1656106505; 
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , Philippe Mathieu-Daudé , Marcel Apfelbaum , Łukasz Gieryk , Klaus Jensen
Subject: [PULL 04/15] hw/nvme: Implement the Function Level Reset
Date: Thu, 23 Jun 2022 23:34:31 +0200
Message-Id: <20220623213442.67789-5-its@irrelevant.dk>
In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk>
References: <20220623213442.67789-1-its@irrelevant.dk>

From: Łukasz Gieryk

This patch implements Function Level Reset, a feature that is currently not implemented for the NVMe device even though the NVMe 1.4 specification lists it as mandatory ("shall").
The implementation reuses FLR-related building blocks defined for the pci-bridge module, and follows the same logic: - FLR capability is advertised in the PCIE config, - custom pci_write_config callback detects a write to the trigger register and performs the PCI reset, - which, eventually, calls the custom dc->reset handler. Depending on reset type, parts of the state should (or should not) be cleared. To distinguish the type of reset, an additional parameter is passed to the reset function. This patch also enables advertisement of the Power Management PCI capability. The main reason behind it is to announce the no_soft_reset=1 bit, to signal SR-IOV support where each VF can be reset individually. The implementation purposedly ignores writes to the PMCS.PS register, as even such naïve behavior is enough to correctly handle the D3->D0 transition. It’s worth to note, that the power state transition back to to D3, with all the corresponding side effects, wasn't and stil isn't handled properly. Signed-off-by: Łukasz Gieryk Reviewed-by: Klaus Jensen Acked-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 52 ++++++++++++++++++++++++++++++++++++++++---- hw/nvme/nvme.h | 5 +++++ hw/nvme/trace-events | 1 + 3 files changed, 54 insertions(+), 4 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index b031212758cb..5ae80f114003 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -5903,7 +5903,7 @@ static void nvme_process_sq(void *opaque) } } -static void nvme_ctrl_reset(NvmeCtrl *n) +static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) { NvmeNamespace *ns; int i; @@ -5935,7 +5935,9 @@ static void nvme_ctrl_reset(NvmeCtrl *n) } if (!pci_is_vf(&n->parent_obj) && n->params.sriov_max_vfs) { - pcie_sriov_pf_disable_vfs(&n->parent_obj); + if (rst != NVME_RESET_CONTROLLER) { + pcie_sriov_pf_disable_vfs(&n->parent_obj); + } } n->aer_queued = 0; @@ -6169,7 +6171,7 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, } } else if (!NVME_CC_EN(data) && NVME_CC_EN(cc)) { trace_pci_nvme_mmio_stopped(); - nvme_ctrl_reset(n); + nvme_ctrl_reset(n, NVME_RESET_CONTROLLER); cc = 0; csts &= ~NVME_CSTS_READY; } @@ -6727,6 +6729,28 @@ static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset, PCI_BASE_ADDRESS_MEM_TYPE_64, bar_size); } +static int nvme_add_pm_capability(PCIDevice *pci_dev, uint8_t offset) +{ + Error *err = NULL; + int ret; + + ret = pci_add_capability(pci_dev, PCI_CAP_ID_PM, offset, + PCI_PM_SIZEOF, &err); + if (err) { + error_report_err(err); + return ret; + } + + pci_set_word(pci_dev->config + offset + PCI_PM_PMC, + PCI_PM_CAP_VER_1_2); + pci_set_word(pci_dev->config + offset + PCI_PM_CTRL, + PCI_PM_CTRL_NO_SOFT_RESET); + pci_set_word(pci_dev->wmask + offset + PCI_PM_CTRL, + PCI_PM_CTRL_STATE_MASK); + + return 0; +} + static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) { uint8_t *pci_conf = pci_dev->config; @@ -6748,7 +6772,9 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) } pci_config_set_class(pci_conf, PCI_CLASS_STORAGE_EXPRESS); + nvme_add_pm_capability(pci_dev, 0x60); pcie_endpoint_cap_init(pci_dev, 0x80); + pcie_cap_flr_init(pci_dev); if (n->params.sriov_max_vfs) { pcie_ari_init(pci_dev, 0x100, 1); } @@ -6999,7 +7025,7 @@ static void nvme_exit(PCIDevice *pci_dev) NvmeNamespace *ns; int i; - nvme_ctrl_reset(n); + nvme_ctrl_reset(n, NVME_RESET_FUNCTION); if (n->subsys) { for (i = 1; i <= NVME_MAX_NAMESPACES; i++) { @@ -7098,6 +7124,22 @@ static void nvme_set_smart_warning(Object 
*obj, Visitor *v, const char *name, } } +static void nvme_pci_reset(DeviceState *qdev) +{ + PCIDevice *pci_dev = PCI_DEVICE(qdev); + NvmeCtrl *n = NVME(pci_dev); + + trace_pci_nvme_pci_reset(); + nvme_ctrl_reset(n, NVME_RESET_FUNCTION); +} + +static void nvme_pci_write_config(PCIDevice *dev, uint32_t address, + uint32_t val, int len) +{ + pci_default_write_config(dev, address, val, len); + pcie_cap_flr_write_config(dev, address, val, len); +} + static const VMStateDescription nvme_vmstate = { .name = "nvme", .unmigratable = 1, @@ -7109,6 +7151,7 @@ static void nvme_class_init(ObjectClass *oc, void *data) PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc); pc->realize = nvme_realize; + pc->config_write = nvme_pci_write_config; pc->exit = nvme_exit; pc->class_id = PCI_CLASS_STORAGE_EXPRESS; pc->revision = 2; @@ -7117,6 +7160,7 @@ static void nvme_class_init(ObjectClass *oc, void *data) dc->desc = "Non-Volatile Memory Express"; device_class_set_props(dc, nvme_props); dc->vmsd = &nvme_vmstate; + dc->reset = nvme_pci_reset; } static void nvme_instance_init(Object *obj) diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index b66421cdf9e8..7b317d3dc469 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -488,6 +488,11 @@ typedef struct NvmeCtrl { NvmeSecCtrlList sec_ctrl_list; } NvmeCtrl; +typedef enum NvmeResetType { + NVME_RESET_FUNCTION = 0, + NVME_RESET_CONTROLLER = 1, +} NvmeResetType; + static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid) { if (!nsid || nsid > NVME_MAX_NAMESPACES) { diff --git a/hw/nvme/trace-events b/hw/nvme/trace-events index 889bbb3101e4..b07864c57322 100644 --- a/hw/nvme/trace-events +++ b/hw/nvme/trace-events @@ -110,6 +110,7 @@ pci_nvme_zd_extension_set(uint32_t zone_idx) "set descriptor extension for zone_ pci_nvme_clear_ns_close(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to Closed state" pci_nvme_clear_ns_reset(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to Empty state" pci_nvme_zoned_zrwa_implicit_flush(uint64_t zslba, uint32_t nlb) "zslba 0x%"PRIx64" nlb %"PRIu32"" +pci_nvme_pci_reset(void) "PCI Function Level Reset" # error conditions pci_nvme_err_mdts(size_t len) "len %zu" From patchwork Thu Jun 23 21:34:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C1259C43334 for ; Thu, 23 Jun 2022 21:43:01 +0000 (UTC) Received: from localhost ([::1]:40276 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UbM-0005S3-Iz for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:43:00 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:54994) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTr-0004UF-Pw; Thu, 23 Jun 2022 17:35:17 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:37471) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTp-0006df-RN; Thu, 23 Jun 2022 17:35:15 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.west.internal (Postfix) with ESMTP 
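A guest-side illustration, not part of the patch above: with the FLR capability advertised by PULL 04/15, the Linux PCI core can reset the function through its generic sysfs reset attribute (it typically prefers FLR when the device advertises it). The PCI address is a placeholder.

/*
 * Hedged example: request a function reset of the emulated NVMe device
 * from a Linux guest. "0000:01:00.0" is a placeholder address.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *attr = "/sys/bus/pci/devices/0000:01:00.0/reset";
    int fd = open(attr, O_WRONLY);

    if (fd < 0) {
        perror("open reset");       /* attribute is absent if no reset method exists */
        return 1;
    }
    if (write(fd, "1", 1) != 1) {
        perror("write reset");
        close(fd);
        return 1;
    }
    close(fd);
    printf("function reset requested\n");
    return 0;
}

On the device side this ends up in the new nvme_pci_reset() handler, which calls nvme_ctrl_reset() with NVME_RESET_FUNCTION.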
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S.
Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 05/15] hw/nvme: Make max_ioqpairs and msix_qsize configurable in runtime Date: Thu, 23 Jun 2022 23:34:32 +0200 Message-Id: <20220623213442.67789-6-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk The NVMe device defines two properties: max_ioqpairs, msix_qsize. Having them as constants is problematic for SR-IOV support. SR-IOV introduces virtual resources (queues, interrupts) that can be assigned to PF and its dependent VFs. Each device, following a reset, should work with the configured number of queues. A single constant is no longer sufficient to hold the whole state. This patch tries to solve the problem by introducing additional variables in NvmeCtrl’s state. The variables for, e.g., managing queues are therefore organized as: - n->params.max_ioqpairs – no changes, constant set by the user - n->(mutable_state) – (not a part of this patch) user-configurable, specifies number of queues available _after_ reset - n->conf_ioqpairs - (new) used in all the places instead of the ‘old’ n->params.max_ioqpairs; initialized in realize() and updated during reset() to reflect user’s changes to the mutable state Since the number of available i/o queues and interrupts can change in runtime, buffers for sq/cqs and the MSIX-related structures are allocated big enough to handle the limits, to completely avoid the complicated reallocation. A helper function (nvme_update_msixcap_ts) updates the corresponding capability register, to signal configuration changes. Signed-off-by: Łukasz Gieryk Reviewed-by: Klaus Jensen Acked-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 52 ++++++++++++++++++++++++++++++++++---------------- hw/nvme/nvme.h | 2 ++ 2 files changed, 38 insertions(+), 16 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 5ae80f114003..e970234a2c87 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -448,12 +448,12 @@ static bool nvme_nsid_valid(NvmeCtrl *n, uint32_t nsid) static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid) { - return sqid < n->params.max_ioqpairs + 1 && n->sq[sqid] != NULL ? 0 : -1; + return sqid < n->conf_ioqpairs + 1 && n->sq[sqid] != NULL ? 0 : -1; } static int nvme_check_cqid(NvmeCtrl *n, uint16_t cqid) { - return cqid < n->params.max_ioqpairs + 1 && n->cq[cqid] != NULL ? 0 : -1; + return cqid < n->conf_ioqpairs + 1 && n->cq[cqid] != NULL ? 
0 : -1; } static void nvme_inc_cq_tail(NvmeCQueue *cq) @@ -4295,8 +4295,7 @@ static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_err_invalid_create_sq_cqid(cqid); return NVME_INVALID_CQID | NVME_DNR; } - if (unlikely(!sqid || sqid > n->params.max_ioqpairs || - n->sq[sqid] != NULL)) { + if (unlikely(!sqid || sqid > n->conf_ioqpairs || n->sq[sqid] != NULL)) { trace_pci_nvme_err_invalid_create_sq_sqid(sqid); return NVME_INVALID_QID | NVME_DNR; } @@ -4648,8 +4647,7 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_create_cq(prp1, cqid, vector, qsize, qflags, NVME_CQ_FLAGS_IEN(qflags) != 0); - if (unlikely(!cqid || cqid > n->params.max_ioqpairs || - n->cq[cqid] != NULL)) { + if (unlikely(!cqid || cqid > n->conf_ioqpairs || n->cq[cqid] != NULL)) { trace_pci_nvme_err_invalid_create_cq_cqid(cqid); return NVME_INVALID_QID | NVME_DNR; } @@ -4665,7 +4663,7 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_err_invalid_create_cq_vector(vector); return NVME_INVALID_IRQ_VECTOR | NVME_DNR; } - if (unlikely(vector >= n->params.msix_qsize)) { + if (unlikely(vector >= n->conf_msix_qsize)) { trace_pci_nvme_err_invalid_create_cq_vector(vector); return NVME_INVALID_IRQ_VECTOR | NVME_DNR; } @@ -5263,13 +5261,12 @@ defaults: break; case NVME_NUMBER_OF_QUEUES: - result = (n->params.max_ioqpairs - 1) | - ((n->params.max_ioqpairs - 1) << 16); + result = (n->conf_ioqpairs - 1) | ((n->conf_ioqpairs - 1) << 16); trace_pci_nvme_getfeat_numq(result); break; case NVME_INTERRUPT_VECTOR_CONF: iv = dw11 & 0xffff; - if (iv >= n->params.max_ioqpairs + 1) { + if (iv >= n->conf_ioqpairs + 1) { return NVME_INVALID_FIELD | NVME_DNR; } @@ -5425,10 +5422,10 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_setfeat_numq((dw11 & 0xffff) + 1, ((dw11 >> 16) & 0xffff) + 1, - n->params.max_ioqpairs, - n->params.max_ioqpairs); - req->cqe.result = cpu_to_le32((n->params.max_ioqpairs - 1) | - ((n->params.max_ioqpairs - 1) << 16)); + n->conf_ioqpairs, + n->conf_ioqpairs); + req->cqe.result = cpu_to_le32((n->conf_ioqpairs - 1) | + ((n->conf_ioqpairs - 1) << 16)); break; case NVME_ASYNCHRONOUS_EVENT_CONF: n->features.async_config = dw11; @@ -5903,8 +5900,24 @@ static void nvme_process_sq(void *opaque) } } +static void nvme_update_msixcap_ts(PCIDevice *pci_dev, uint32_t table_size) +{ + uint8_t *config; + + if (!msix_present(pci_dev)) { + return; + } + + assert(table_size > 0 && table_size <= pci_dev->msix_entries_nr); + + config = pci_dev->config + pci_dev->msix_cap; + pci_set_word_by_mask(config + PCI_MSIX_FLAGS, PCI_MSIX_FLAGS_QSIZE, + table_size - 1); +} + static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) { + PCIDevice *pci_dev = &n->parent_obj; NvmeNamespace *ns; int i; @@ -5934,15 +5947,17 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) g_free(event); } - if (!pci_is_vf(&n->parent_obj) && n->params.sriov_max_vfs) { + if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) { if (rst != NVME_RESET_CONTROLLER) { - pcie_sriov_pf_disable_vfs(&n->parent_obj); + pcie_sriov_pf_disable_vfs(pci_dev); } } n->aer_queued = 0; n->outstanding_aers = 0; n->qs_created = false; + + nvme_update_msixcap_ts(pci_dev, n->conf_msix_qsize); } static void nvme_ctrl_shutdown(NvmeCtrl *n) @@ -6653,6 +6668,9 @@ static void nvme_init_state(NvmeCtrl *n) NvmeSecCtrlEntry *sctrl; int i; + n->conf_ioqpairs = n->params.max_ioqpairs; + n->conf_msix_qsize = n->params.msix_qsize; + /* add one to max_ioqpairs to account for the admin queue 
pair */ n->reg_size = pow2ceil(sizeof(NvmeBar) + 2 * (n->params.max_ioqpairs + 1) * NVME_DB_SIZE); @@ -6814,6 +6832,8 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) } } + nvme_update_msixcap_ts(pci_dev, n->conf_msix_qsize); + if (n->params.cmb_size_mb) { nvme_init_cmb(n, pci_dev); } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 7b317d3dc469..aab4962fb857 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -439,6 +439,8 @@ typedef struct NvmeCtrl { uint64_t starttime_ms; uint16_t temperature; uint8_t smart_critical_warning; + uint32_t conf_msix_qsize; + uint32_t conf_ioqpairs; struct { MemoryRegion mem; From patchwork Thu Jun 23 21:34:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8394BC43334 for ; Thu, 23 Jun 2022 21:50:30 +0000 (UTC) Received: from localhost ([::1]:57620 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Uib-0000Mg-Kj for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:50:29 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55066) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTw-0004dS-P5; Thu, 23 Jun 2022 17:35:20 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:43511) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTu-0006e8-I1; Thu, 23 Jun 2022 17:35:20 -0400 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.west.internal (Postfix) with ESMTP id 9AC3E320097C; Thu, 23 Jun 2022 17:35:15 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute5.internal (MEProxy); Thu, 23 Jun 2022 17:35:17 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020115; x= 1656106515; bh=ezkYm9EDGpTe/unPxklpx6mjGWwbgQ86K8FY51Uf7e8=; b=h fwdO3nEB7PLxQXHDKwEGbYnN3/bldzwKX08OEXOXQEsbMQKbJAcpPJwJHxBh4wrY 1qp38NvxPtT3MyQeV1oPZ41zntsrixMFJ/wNqwMc68XFeAfnfjtycns7PhO0//a3 5gwAzhwG2ZJfOfDE7nFNPRZ++JpYKSfwzpWDXux00TZdi/TxqaQwMV3cpP14yXE2 pz/1UJPGT1zwz1CvVjfbEAjlJv18Kc6Erj4mc3KgV3SMvBfTbSHcoeMassXCcmEo wgtrR7isHfVlSMw2PGPRMr4RpV8R1kq+15zqhXZ2ZgwxiXdqV2q6mLT9uc/802OK j6g1NTbKn3Smblq64w6Uw== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding :content-type:date:date:feedback-id:feedback-id:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy :x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1656020115; x= 1656106515; bh=ezkYm9EDGpTe/unPxklpx6mjGWwbgQ86K8FY51Uf7e8=; b=L kaQ4OjifazR0tRnhhVCvx7BmgCPTwJV6ctbvKgqzTc3dzcFwSe7l9I4pA1lroDko xf+tiAm2+JsLDWHP4Qahm8a4lsEXeVUPjvTikaToVj6UKA4ggs26mHI4aZlUp6Kb u7Nz1ouLDwoa5XwOGpJ2zH2K6eo9HHmoEvy6gn5bnRdtOzzXZ2XZN8zbW/uukERa hkemgugtwjkbE8rjSBjYSNU2nE7lc8Ws6DwfY69IlKZLUIdWp5DCZq2sXLvyXSbd 
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , Philippe Mathieu-Daudé , Marcel Apfelbaum , Łukasz Gieryk , Klaus Jensen
Subject: [PULL 06/15] hw/nvme: Remove reg_size variable and update BAR0 size calculation
Date: Thu, 23 Jun 2022 23:34:33 +0200
Message-Id: <20220623213442.67789-7-its@irrelevant.dk>
In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk>
References: <20220623213442.67789-1-its@irrelevant.dk>

From: Łukasz Gieryk

The n->reg_size variable unnecessarily splits the BAR0 size calculation into two phases; it is removed to simplify the code.

With all the calculations done in one place, the pow2ceil originally applied to reg_size turns out to be unnecessary. The rounding should happen as the last step, once the BAR size includes the Nvme registers, the queue registers, and the MSIX-related space.

Finally, the size of the mmio memory region is extended to cover the first 4KiB padding (see the map below). Access to this range is handled as interaction with a non-existing queue and generates an error trace, so effectively nothing changes, while the reg_size variable is no longer needed.

 --------------------
|        BAR0        |
 --------------------
[Nvme Registers     ]
[Queues             ]
[power-of-2 padding ] - removed in this patch
[4KiB padding (1)   ]
[MSIX TABLE         ]
[4KiB padding (2)   ]
[MSIX PBA           ]
[power-of-2 padding ]

Signed-off-by: Łukasz Gieryk
Reviewed-by: Klaus Jensen
Acked-by: Michael S.
Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 10 +++++----- hw/nvme/nvme.h | 1 - 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index e970234a2c87..9f07a730d341 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -6671,9 +6671,6 @@ static void nvme_init_state(NvmeCtrl *n) n->conf_ioqpairs = n->params.max_ioqpairs; n->conf_msix_qsize = n->params.msix_qsize; - /* add one to max_ioqpairs to account for the admin queue pair */ - n->reg_size = pow2ceil(sizeof(NvmeBar) + - 2 * (n->params.max_ioqpairs + 1) * NVME_DB_SIZE); n->sq = g_new0(NvmeSQueue *, n->params.max_ioqpairs + 1); n->cq = g_new0(NvmeCQueue *, n->params.max_ioqpairs + 1); n->temperature = NVME_TEMPERATURE; @@ -6797,7 +6794,10 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) pcie_ari_init(pci_dev, 0x100, 1); } - bar_size = QEMU_ALIGN_UP(n->reg_size, 4 * KiB); + /* add one to max_ioqpairs to account for the admin queue pair */ + bar_size = sizeof(NvmeBar) + + 2 * (n->params.max_ioqpairs + 1) * NVME_DB_SIZE; + bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB); msix_table_offset = bar_size; msix_table_size = PCI_MSIX_ENTRY_SIZE * n->params.msix_qsize; @@ -6811,7 +6811,7 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) memory_region_init(&n->bar0, OBJECT(n), "nvme-bar0", bar_size); memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n, "nvme", - n->reg_size); + msix_table_offset); memory_region_add_subregion(&n->bar0, 0, &n->iomem); if (pci_is_vf(pci_dev)) { diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index aab4962fb857..d9deb0b1ec43 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -429,7 +429,6 @@ typedef struct NvmeCtrl { uint16_t max_prp_ents; uint16_t cqe_size; uint16_t sqe_size; - uint32_t reg_size; uint32_t max_q_ents; uint8_t outstanding_aers; uint32_t irq_status; From patchwork Thu Jun 23 21:34:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893214 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D45CBC43334 for ; Thu, 23 Jun 2022 21:43:39 +0000 (UTC) Received: from localhost ([::1]:41854 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Ubx-0006XM-Vj for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:43:38 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55100) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UU1-0004k3-4x; Thu, 23 Jun 2022 17:35:26 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:55627) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UTz-0006fK-Du; Thu, 23 Jun 2022 17:35:24 -0400 Received: from compute1.internal (compute1.nyi.internal [10.202.2.41]) by mailout.west.internal (Postfix) with ESMTP id 8FB883200959; Thu, 23 Jun 2022 17:35:20 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute1.internal (MEProxy); Thu, 23 Jun 2022 17:35:22 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from 
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S.
Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 07/15] hw/nvme: Calculate BAR attributes in a function Date: Thu, 23 Jun 2022 23:34:34 +0200 Message-Id: <20220623213442.67789-8-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk An NVMe device with SR-IOV capability calculates the BAR size differently for PF and VF, so it makes sense to extract the common code to a separate function. Signed-off-by: Łukasz Gieryk Reviewed-by: Klaus Jensen Acked-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 45 +++++++++++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 14 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 9f07a730d341..3315e5c3de0a 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -6730,6 +6730,34 @@ static void nvme_init_pmr(NvmeCtrl *n, PCIDevice *pci_dev) memory_region_set_enabled(&n->pmr.dev->mr, false); } +static uint64_t nvme_bar_size(unsigned total_queues, unsigned total_irqs, + unsigned *msix_table_offset, + unsigned *msix_pba_offset) +{ + uint64_t bar_size, msix_table_size, msix_pba_size; + + bar_size = sizeof(NvmeBar) + 2 * total_queues * NVME_DB_SIZE; + bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB); + + if (msix_table_offset) { + *msix_table_offset = bar_size; + } + + msix_table_size = PCI_MSIX_ENTRY_SIZE * total_irqs; + bar_size += msix_table_size; + bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB); + + if (msix_pba_offset) { + *msix_pba_offset = bar_size; + } + + msix_pba_size = QEMU_ALIGN_UP(total_irqs, 64) / 8; + bar_size += msix_pba_size; + + bar_size = pow2ceil(bar_size); + return bar_size; +} + static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset, uint64_t bar_size) { @@ -6769,7 +6797,7 @@ static int nvme_add_pm_capability(PCIDevice *pci_dev, uint8_t offset) static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) { uint8_t *pci_conf = pci_dev->config; - uint64_t bar_size, msix_table_size, msix_pba_size; + uint64_t bar_size; unsigned msix_table_offset, msix_pba_offset; int ret; @@ -6795,19 +6823,8 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) } /* add one to max_ioqpairs to account for the admin queue pair */ - bar_size = sizeof(NvmeBar) + - 2 * (n->params.max_ioqpairs + 1) * NVME_DB_SIZE; - bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB); - msix_table_offset = bar_size; - msix_table_size = PCI_MSIX_ENTRY_SIZE * n->params.msix_qsize; - - bar_size += msix_table_size; - bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB); - msix_pba_offset = bar_size; - msix_pba_size = 
QEMU_ALIGN_UP(n->params.msix_qsize, 64) / 8; - - bar_size += msix_pba_size; - bar_size = pow2ceil(bar_size); + bar_size = nvme_bar_size(n->params.max_ioqpairs + 1, n->params.msix_qsize, + &msix_table_offset, &msix_pba_offset); memory_region_init(&n->bar0, OBJECT(n), "nvme-bar0", bar_size); memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n, "nvme", From patchwork Thu Jun 23 21:34:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7CD23C43334 for ; Thu, 23 Jun 2022 21:48:25 +0000 (UTC) Received: from localhost ([::1]:52958 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Uga-0005el-H1 for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:48:24 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55122) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UU6-0004xJ-Ga; Thu, 23 Jun 2022 17:35:30 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:48953) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UU4-0006fd-3M; Thu, 23 Jun 2022 17:35:30 -0400 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.west.internal (Postfix) with ESMTP id 3D070320097E; Thu, 23 Jun 2022 17:35:25 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute5.internal (MEProxy); Thu, 23 Jun 2022 17:35:26 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020124; x= 1656106524; bh=4mUg/HH0qTKh/3xOaecbdOTxrGfoQODZREtxdP5lhZo=; b=H nINxHDDecOIdfnUnd+fomtDCvoUQNDifgMmXOW4O04z6sxSiF3ne04lv/38jBCtp GgNOFPNdYSYVjJT6wq7kRrbFucfjq+dEWNo1X4+/5IQu+pYqcKtKLhs7NIfrUzUG QRDSQYus1U8SZUQyem7nXrbW/UiVTyqVjXajLG4UQc6Cq3zDysuwjZVTLdfEAwlC QE+oTQSq+049wARxjjyAVuczJ0F4X+fqGTXbFp2dMeGWZR5gY45YVzCVFQVuphO/ gtr2sgPWG+fjGs2rcai+ENWU+VP3moCUKJuKey1gc0J61T1LwhNhQX2nVPLU9Zcl SboJIZ++HxP4rCGDXwJTw== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding :content-type:date:date:feedback-id:feedback-id:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy :x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1656020124; x= 1656106524; bh=4mUg/HH0qTKh/3xOaecbdOTxrGfoQODZREtxdP5lhZo=; b=d 3gMX2Zv86bardHHKpiucyVmCr9VeoKscyO5uXMD0MNYdMnTLH1xFGe/JYGv+EQbS fFFEdkjUerDb7XAdJk+/G5IArrrmgO9d/44qQprA6LAQJtZ40o74RG5p5y4y+opF icIYC4o7MH2+xVrqKLkdo5O47c5UPx9B64OadX1NNTIh0WP2CrgWY4BAuMg7tnDD gHuCRO/i0bkc/outOOIUO1g7bQuhdP6d90UVaH9UPUrstQpg4fK+97uEt4P+yFdB e5UQtWmwrhko1ZrAnxLwu0EEDCu+RHXbJ1ZjmurcTmyXTXh+u0+TgBnvN51oe30r ORc2jjTU85RbUSmZG34kg== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefjedgudeifecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh 
necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepmfhl rghushculfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrf grthhtvghrnhepfeevtdeuteeuudffvefggfdtfedtueelfffhieegffekgeefjeefffet jeeihfdvnecuvehluhhsthgvrhfuihiivgepvdenucfrrghrrghmpehmrghilhhfrhhomh epihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:22 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 08/15] hw/nvme: Initialize capability structures for primary/secondary controllers Date: Thu, 23 Jun 2022 23:34:35 +0200 Message-Id: <20220623213442.67789-9-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk With four new properties: - sriov_v{i,q}_flexible, - sriov_max_v{i,q}_per_vf, one can configure the number of available flexible resources, as well as the limits. The primary and secondary controller capability structures are initialized accordingly. Since the number of available queues (interrupts) now varies between VF/PF, BAR size calculation is also adjusted. Signed-off-by: Łukasz Gieryk Acked-by: Michael S. Tsirkin Reviewed-by: Klaus Jensen Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 141 ++++++++++++++++++++++++++++++++++++++++--- hw/nvme/nvme.h | 4 ++ include/block/nvme.h | 5 ++ 3 files changed, 143 insertions(+), 7 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 3315e5c3de0a..3728813e90f0 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -36,6 +36,10 @@ * zoned.zasl=, \ * zoned.auto_transition=, \ * sriov_max_vfs= \ + * sriov_vq_flexible= \ + * sriov_vi_flexible= \ + * sriov_max_vi_per_vf= \ + * sriov_max_vq_per_vf= \ * subsys= * -device nvme-ns,drive=,bus=,nsid=,\ * zoned=, \ @@ -113,6 +117,29 @@ * enables reporting of both SR-IOV and ARI capabilities by the NVMe device. * Virtual function controllers will not report SR-IOV capability. * + * NOTE: Single Root I/O Virtualization support is experimental. + * All the related parameters may be subject to change. + * + * - `sriov_vq_flexible` + * Indicates the total number of flexible queue resources assignable to all + * the secondary controllers. Implicitly sets the number of primary + * controller's private resources to `(max_ioqpairs - sriov_vq_flexible)`. 
+ * + * - `sriov_vi_flexible` + * Indicates the total number of flexible interrupt resources assignable to + * all the secondary controllers. Implicitly sets the number of primary + * controller's private resources to `(msix_qsize - sriov_vi_flexible)`. + * + * - `sriov_max_vi_per_vf` + * Indicates the maximum number of virtual interrupt resources assignable + * to a secondary controller. The default 0 resolves to + * `(sriov_vi_flexible / sriov_max_vfs)`. + * + * - `sriov_max_vq_per_vf` + * Indicates the maximum number of virtual queue resources assignable to + * a secondary controller. The default 0 resolves to + * `(sriov_vq_flexible / sriov_max_vfs)`. + * * nvme namespace device parameters * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * - `shared` @@ -185,6 +212,7 @@ #define NVME_NUM_FW_SLOTS 1 #define NVME_DEFAULT_MAX_ZA_SIZE (128 * KiB) #define NVME_MAX_VFS 127 +#define NVME_VF_RES_GRANULARITY 1 #define NVME_VF_OFFSET 0x1 #define NVME_VF_STRIDE 1 @@ -6658,6 +6686,53 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp) error_setg(errp, "PMR is not supported with SR-IOV"); return; } + + if (!params->sriov_vq_flexible || !params->sriov_vi_flexible) { + error_setg(errp, "both sriov_vq_flexible and sriov_vi_flexible" + " must be set for the use of SR-IOV"); + return; + } + + if (params->sriov_vq_flexible < params->sriov_max_vfs * 2) { + error_setg(errp, "sriov_vq_flexible must be greater than or equal" + " to %d (sriov_max_vfs * 2)", params->sriov_max_vfs * 2); + return; + } + + if (params->max_ioqpairs < params->sriov_vq_flexible + 2) { + error_setg(errp, "(max_ioqpairs - sriov_vq_flexible) must be" + " greater than or equal to 2"); + return; + } + + if (params->sriov_vi_flexible < params->sriov_max_vfs) { + error_setg(errp, "sriov_vi_flexible must be greater than or equal" + " to %d (sriov_max_vfs)", params->sriov_max_vfs); + return; + } + + if (params->msix_qsize < params->sriov_vi_flexible + 1) { + error_setg(errp, "(msix_qsize - sriov_vi_flexible) must be" + " greater than or equal to 1"); + return; + } + + if (params->sriov_max_vi_per_vf && + (params->sriov_max_vi_per_vf - 1) % NVME_VF_RES_GRANULARITY) { + error_setg(errp, "sriov_max_vi_per_vf must meet:" + " (sriov_max_vi_per_vf - 1) %% %d == 0 and" + " sriov_max_vi_per_vf >= 1", NVME_VF_RES_GRANULARITY); + return; + } + + if (params->sriov_max_vq_per_vf && + (params->sriov_max_vq_per_vf < 2 || + (params->sriov_max_vq_per_vf - 1) % NVME_VF_RES_GRANULARITY)) { + error_setg(errp, "sriov_max_vq_per_vf must meet:" + " (sriov_max_vq_per_vf - 1) %% %d == 0 and" + " sriov_max_vq_per_vf >= 2", NVME_VF_RES_GRANULARITY); + return; + } } } @@ -6666,10 +6741,19 @@ static void nvme_init_state(NvmeCtrl *n) NvmePriCtrlCap *cap = &n->pri_ctrl_cap; NvmeSecCtrlList *list = &n->sec_ctrl_list; NvmeSecCtrlEntry *sctrl; + uint8_t max_vfs; int i; - n->conf_ioqpairs = n->params.max_ioqpairs; - n->conf_msix_qsize = n->params.msix_qsize; + if (pci_is_vf(&n->parent_obj)) { + sctrl = nvme_sctrl(n); + max_vfs = 0; + n->conf_ioqpairs = sctrl->nvq ? le16_to_cpu(sctrl->nvq) - 1 : 0; + n->conf_msix_qsize = sctrl->nvi ? 
le16_to_cpu(sctrl->nvi) : 1; + } else { + max_vfs = n->params.sriov_max_vfs; + n->conf_ioqpairs = n->params.max_ioqpairs; + n->conf_msix_qsize = n->params.msix_qsize; + } n->sq = g_new0(NvmeSQueue *, n->params.max_ioqpairs + 1); n->cq = g_new0(NvmeCQueue *, n->params.max_ioqpairs + 1); @@ -6678,14 +6762,41 @@ static void nvme_init_state(NvmeCtrl *n) n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1); - list->numcntl = cpu_to_le16(n->params.sriov_max_vfs); - for (i = 0; i < n->params.sriov_max_vfs; i++) { + list->numcntl = cpu_to_le16(max_vfs); + for (i = 0; i < max_vfs; i++) { sctrl = &list->sec[i]; sctrl->pcid = cpu_to_le16(n->cntlid); sctrl->vfn = cpu_to_le16(i + 1); } cap->cntlid = cpu_to_le16(n->cntlid); + cap->crt = NVME_CRT_VQ | NVME_CRT_VI; + + if (pci_is_vf(&n->parent_obj)) { + cap->vqprt = cpu_to_le16(1 + n->conf_ioqpairs); + } else { + cap->vqprt = cpu_to_le16(1 + n->params.max_ioqpairs - + n->params.sriov_vq_flexible); + cap->vqfrt = cpu_to_le32(n->params.sriov_vq_flexible); + cap->vqrfap = cap->vqfrt; + cap->vqgran = cpu_to_le16(NVME_VF_RES_GRANULARITY); + cap->vqfrsm = n->params.sriov_max_vq_per_vf ? + cpu_to_le16(n->params.sriov_max_vq_per_vf) : + cap->vqfrt / MAX(max_vfs, 1); + } + + if (pci_is_vf(&n->parent_obj)) { + cap->viprt = cpu_to_le16(n->conf_msix_qsize); + } else { + cap->viprt = cpu_to_le16(n->params.msix_qsize - + n->params.sriov_vi_flexible); + cap->vifrt = cpu_to_le32(n->params.sriov_vi_flexible); + cap->virfap = cap->vifrt; + cap->vigran = cpu_to_le16(NVME_VF_RES_GRANULARITY); + cap->vifrsm = n->params.sriov_max_vi_per_vf ? + cpu_to_le16(n->params.sriov_max_vi_per_vf) : + cap->vifrt / MAX(max_vfs, 1); + } } static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev) @@ -6758,11 +6869,14 @@ static uint64_t nvme_bar_size(unsigned total_queues, unsigned total_irqs, return bar_size; } -static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset, - uint64_t bar_size) +static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset) { uint16_t vf_dev_id = n->params.use_intel_id ? 
PCI_DEVICE_ID_INTEL_NVME : PCI_DEVICE_ID_REDHAT_NVME; + NvmePriCtrlCap *cap = &n->pri_ctrl_cap; + uint64_t bar_size = nvme_bar_size(le16_to_cpu(cap->vqfrsm), + le16_to_cpu(cap->vifrsm), + NULL, NULL); pcie_sriov_pf_init(pci_dev, offset, "nvme", vf_dev_id, n->params.sriov_max_vfs, n->params.sriov_max_vfs, @@ -6860,7 +6974,7 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp) } if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) { - nvme_init_sriov(n, pci_dev, 0x120, bar_size); + nvme_init_sriov(n, pci_dev, 0x120); } return 0; @@ -6884,6 +6998,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) NvmeIdCtrl *id = &n->id_ctrl; uint8_t *pci_conf = pci_dev->config; uint64_t cap = ldq_le_p(&n->bar.cap); + NvmeSecCtrlEntry *sctrl = nvme_sctrl(n); id->vid = cpu_to_le16(pci_get_word(pci_conf + PCI_VENDOR_ID)); id->ssvid = cpu_to_le16(pci_get_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID)); @@ -6976,6 +7091,10 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) stl_le_p(&n->bar.vs, NVME_SPEC_VER); n->bar.intmc = n->bar.intms = 0; + + if (pci_is_vf(&n->parent_obj) && !sctrl->scs) { + stl_le_p(&n->bar.csts, NVME_CSTS_FAILED); + } } static int nvme_init_subsys(NvmeCtrl *n, Error **errp) @@ -7116,6 +7235,14 @@ static Property nvme_props[] = { DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl, params.auto_transition_zones, true), DEFINE_PROP_UINT8("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0), + DEFINE_PROP_UINT16("sriov_vq_flexible", NvmeCtrl, + params.sriov_vq_flexible, 0), + DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl, + params.sriov_vi_flexible, 0), + DEFINE_PROP_UINT8("sriov_max_vi_per_vf", NvmeCtrl, + params.sriov_max_vi_per_vf, 0), + DEFINE_PROP_UINT8("sriov_max_vq_per_vf", NvmeCtrl, + params.sriov_max_vq_per_vf, 0), DEFINE_PROP_END_OF_LIST(), }; diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index d9deb0b1ec43..9afa5e1a930a 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -412,6 +412,10 @@ typedef struct NvmeParams { bool auto_transition_zones; bool legacy_cmb; uint8_t sriov_max_vfs; + uint16_t sriov_vq_flexible; + uint16_t sriov_vi_flexible; + uint8_t sriov_max_vq_per_vf; + uint8_t sriov_max_vi_per_vf; } NvmeParams; typedef struct NvmeCtrl { diff --git a/include/block/nvme.h b/include/block/nvme.h index 94efd32578cb..58d08d5c2aaf 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1576,6 +1576,11 @@ typedef struct QEMU_PACKED NvmePriCtrlCap { uint8_t rsvd80[4016]; } NvmePriCtrlCap; +typedef enum NvmePriCtrlCapCrt { + NVME_CRT_VQ = 1 << 0, + NVME_CRT_VI = 1 << 1, +} NvmePriCtrlCapCrt; + typedef struct QEMU_PACKED NvmeSecCtrlEntry { uint16_t scid; uint16_t pcid; From patchwork Thu Jun 23 21:34:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893215 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 51DF2C43334 for ; Thu, 23 Jun 2022 21:47:11 +0000 (UTC) Received: from localhost ([::1]:49660 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UfO-0003QA-2V for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:47:10 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55140) by lists.gnu.org with esmtps 
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S.
Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 09/15] hw/nvme: Add support for the Virtualization Management command Date: Thu, 23 Jun 2022 23:34:36 +0200 Message-Id: <20220623213442.67789-10-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk With the new command one can: - assign flexible resources (queues, interrupts) to primary and secondary controllers, - toggle the online/offline state of given controller. Signed-off-by: Łukasz Gieryk Acked-by: Michael S. Tsirkin Reviewed-by: Klaus Jensen Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 257 ++++++++++++++++++++++++++++++++++++++++++- hw/nvme/nvme.h | 20 ++++ hw/nvme/trace-events | 3 + include/block/nvme.h | 17 +++ 4 files changed, 295 insertions(+), 2 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 3728813e90f0..20f1a7399592 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -188,6 +188,7 @@ #include "qemu/error-report.h" #include "qemu/log.h" #include "qemu/units.h" +#include "qemu/range.h" #include "qapi/error.h" #include "qapi/visitor.h" #include "sysemu/sysemu.h" @@ -262,6 +263,7 @@ static const uint32_t nvme_cse_acs[256] = { [NVME_ADM_CMD_GET_FEATURES] = NVME_CMD_EFF_CSUPP, [NVME_ADM_CMD_ASYNC_EV_REQ] = NVME_CMD_EFF_CSUPP, [NVME_ADM_CMD_NS_ATTACHMENT] = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_NIC, + [NVME_ADM_CMD_VIRT_MNGMT] = NVME_CMD_EFF_CSUPP, [NVME_ADM_CMD_FORMAT_NVM] = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC, }; @@ -293,6 +295,7 @@ static const uint32_t nvme_cse_iocs_zoned[256] = { }; static void nvme_process_sq(void *opaque); +static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst); static uint16_t nvme_sqid(NvmeRequest *req) { @@ -5840,6 +5843,167 @@ out: return status; } +static void nvme_get_virt_res_num(NvmeCtrl *n, uint8_t rt, int *num_total, + int *num_prim, int *num_sec) +{ + *num_total = le32_to_cpu(rt ? + n->pri_ctrl_cap.vifrt : n->pri_ctrl_cap.vqfrt); + *num_prim = le16_to_cpu(rt ? + n->pri_ctrl_cap.virfap : n->pri_ctrl_cap.vqrfap); + *num_sec = le16_to_cpu(rt ? 
n->pri_ctrl_cap.virfa : n->pri_ctrl_cap.vqrfa); +} + +static uint16_t nvme_assign_virt_res_to_prim(NvmeCtrl *n, NvmeRequest *req, + uint16_t cntlid, uint8_t rt, + int nr) +{ + int num_total, num_prim, num_sec; + + if (cntlid != n->cntlid) { + return NVME_INVALID_CTRL_ID | NVME_DNR; + } + + nvme_get_virt_res_num(n, rt, &num_total, &num_prim, &num_sec); + + if (nr > num_total) { + return NVME_INVALID_NUM_RESOURCES | NVME_DNR; + } + + if (nr > num_total - num_sec) { + return NVME_INVALID_RESOURCE_ID | NVME_DNR; + } + + if (rt) { + n->next_pri_ctrl_cap.virfap = cpu_to_le16(nr); + } else { + n->next_pri_ctrl_cap.vqrfap = cpu_to_le16(nr); + } + + req->cqe.result = cpu_to_le32(nr); + return req->status; +} + +static void nvme_update_virt_res(NvmeCtrl *n, NvmeSecCtrlEntry *sctrl, + uint8_t rt, int nr) +{ + int prev_nr, prev_total; + + if (rt) { + prev_nr = le16_to_cpu(sctrl->nvi); + prev_total = le32_to_cpu(n->pri_ctrl_cap.virfa); + sctrl->nvi = cpu_to_le16(nr); + n->pri_ctrl_cap.virfa = cpu_to_le32(prev_total + nr - prev_nr); + } else { + prev_nr = le16_to_cpu(sctrl->nvq); + prev_total = le32_to_cpu(n->pri_ctrl_cap.vqrfa); + sctrl->nvq = cpu_to_le16(nr); + n->pri_ctrl_cap.vqrfa = cpu_to_le32(prev_total + nr - prev_nr); + } +} + +static uint16_t nvme_assign_virt_res_to_sec(NvmeCtrl *n, NvmeRequest *req, + uint16_t cntlid, uint8_t rt, int nr) +{ + int num_total, num_prim, num_sec, num_free, diff, limit; + NvmeSecCtrlEntry *sctrl; + + sctrl = nvme_sctrl_for_cntlid(n, cntlid); + if (!sctrl) { + return NVME_INVALID_CTRL_ID | NVME_DNR; + } + + if (sctrl->scs) { + return NVME_INVALID_SEC_CTRL_STATE | NVME_DNR; + } + + limit = le16_to_cpu(rt ? n->pri_ctrl_cap.vifrsm : n->pri_ctrl_cap.vqfrsm); + if (nr > limit) { + return NVME_INVALID_NUM_RESOURCES | NVME_DNR; + } + + nvme_get_virt_res_num(n, rt, &num_total, &num_prim, &num_sec); + num_free = num_total - num_prim - num_sec; + diff = nr - le16_to_cpu(rt ? sctrl->nvi : sctrl->nvq); + + if (diff > num_free) { + return NVME_INVALID_RESOURCE_ID | NVME_DNR; + } + + nvme_update_virt_res(n, sctrl, rt, nr); + req->cqe.result = cpu_to_le32(nr); + + return req->status; +} + +static uint16_t nvme_virt_set_state(NvmeCtrl *n, uint16_t cntlid, bool online) +{ + NvmeCtrl *sn = NULL; + NvmeSecCtrlEntry *sctrl; + int vf_index; + + sctrl = nvme_sctrl_for_cntlid(n, cntlid); + if (!sctrl) { + return NVME_INVALID_CTRL_ID | NVME_DNR; + } + + if (!pci_is_vf(&n->parent_obj)) { + vf_index = le16_to_cpu(sctrl->vfn) - 1; + sn = NVME(pcie_sriov_get_vf_at_index(&n->parent_obj, vf_index)); + } + + if (online) { + if (!sctrl->nvi || (le16_to_cpu(sctrl->nvq) < 2) || !sn) { + return NVME_INVALID_SEC_CTRL_STATE | NVME_DNR; + } + + if (!sctrl->scs) { + sctrl->scs = 0x1; + nvme_ctrl_reset(sn, NVME_RESET_FUNCTION); + } + } else { + nvme_update_virt_res(n, sctrl, NVME_VIRT_RES_INTERRUPT, 0); + nvme_update_virt_res(n, sctrl, NVME_VIRT_RES_QUEUE, 0); + + if (sctrl->scs) { + sctrl->scs = 0x0; + if (sn) { + nvme_ctrl_reset(sn, NVME_RESET_FUNCTION); + } + } + } + + return NVME_SUCCESS; +} + +static uint16_t nvme_virt_mngmt(NvmeCtrl *n, NvmeRequest *req) +{ + uint32_t dw10 = le32_to_cpu(req->cmd.cdw10); + uint32_t dw11 = le32_to_cpu(req->cmd.cdw11); + uint8_t act = dw10 & 0xf; + uint8_t rt = (dw10 >> 8) & 0x7; + uint16_t cntlid = (dw10 >> 16) & 0xffff; + int nr = dw11 & 0xffff; + + trace_pci_nvme_virt_mngmt(nvme_cid(req), act, cntlid, rt ? 
"VI" : "VQ", nr); + + if (rt != NVME_VIRT_RES_QUEUE && rt != NVME_VIRT_RES_INTERRUPT) { + return NVME_INVALID_RESOURCE_ID | NVME_DNR; + } + + switch (act) { + case NVME_VIRT_MNGMT_ACTION_SEC_ASSIGN: + return nvme_assign_virt_res_to_sec(n, req, cntlid, rt, nr); + case NVME_VIRT_MNGMT_ACTION_PRM_ALLOC: + return nvme_assign_virt_res_to_prim(n, req, cntlid, rt, nr); + case NVME_VIRT_MNGMT_ACTION_SEC_ONLINE: + return nvme_virt_set_state(n, cntlid, true); + case NVME_VIRT_MNGMT_ACTION_SEC_OFFLINE: + return nvme_virt_set_state(n, cntlid, false); + default: + return NVME_INVALID_FIELD | NVME_DNR; + } +} + static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) { trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcode, @@ -5882,6 +6046,8 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) return nvme_aer(n, req); case NVME_ADM_CMD_NS_ATTACHMENT: return nvme_ns_attachment(n, req); + case NVME_ADM_CMD_VIRT_MNGMT: + return nvme_virt_mngmt(n, req); case NVME_ADM_CMD_FORMAT_NVM: return nvme_format(n, req); default: @@ -5943,9 +6109,33 @@ static void nvme_update_msixcap_ts(PCIDevice *pci_dev, uint32_t table_size) table_size - 1); } +static void nvme_activate_virt_res(NvmeCtrl *n) +{ + PCIDevice *pci_dev = &n->parent_obj; + NvmePriCtrlCap *cap = &n->pri_ctrl_cap; + NvmeSecCtrlEntry *sctrl; + + /* -1 to account for the admin queue */ + if (pci_is_vf(pci_dev)) { + sctrl = nvme_sctrl(n); + cap->vqprt = sctrl->nvq; + cap->viprt = sctrl->nvi; + n->conf_ioqpairs = sctrl->nvq ? le16_to_cpu(sctrl->nvq) - 1 : 0; + n->conf_msix_qsize = sctrl->nvi ? le16_to_cpu(sctrl->nvi) : 1; + } else { + cap->vqrfap = n->next_pri_ctrl_cap.vqrfap; + cap->virfap = n->next_pri_ctrl_cap.virfap; + n->conf_ioqpairs = le16_to_cpu(cap->vqprt) + + le16_to_cpu(cap->vqrfap) - 1; + n->conf_msix_qsize = le16_to_cpu(cap->viprt) + + le16_to_cpu(cap->virfap); + } +} + static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) { PCIDevice *pci_dev = &n->parent_obj; + NvmeSecCtrlEntry *sctrl; NvmeNamespace *ns; int i; @@ -5975,9 +6165,20 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) g_free(event); } - if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) { + if (n->params.sriov_max_vfs) { + if (!pci_is_vf(pci_dev)) { + for (i = 0; i < n->sec_ctrl_list.numcntl; i++) { + sctrl = &n->sec_ctrl_list.sec[i]; + nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false); + } + + if (rst != NVME_RESET_CONTROLLER) { + pcie_sriov_pf_disable_vfs(pci_dev); + } + } + if (rst != NVME_RESET_CONTROLLER) { - pcie_sriov_pf_disable_vfs(pci_dev); + nvme_activate_virt_res(n); } } @@ -5986,6 +6187,13 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) n->qs_created = false; nvme_update_msixcap_ts(pci_dev, n->conf_msix_qsize); + + if (pci_is_vf(pci_dev)) { + sctrl = nvme_sctrl(n); + stl_le_p(&n->bar.csts, sctrl->scs ? 0 : NVME_CSTS_FAILED); + } else { + stl_le_p(&n->bar.csts, 0); + } } static void nvme_ctrl_shutdown(NvmeCtrl *n) @@ -6031,7 +6239,15 @@ static int nvme_start_ctrl(NvmeCtrl *n) uint64_t acq = ldq_le_p(&n->bar.acq); uint32_t page_bits = NVME_CC_MPS(cc) + 12; uint32_t page_size = 1 << page_bits; + NvmeSecCtrlEntry *sctrl = nvme_sctrl(n); + if (pci_is_vf(&n->parent_obj) && !sctrl->scs) { + trace_pci_nvme_err_startfail_virt_state(le16_to_cpu(sctrl->nvi), + le16_to_cpu(sctrl->nvq), + sctrl->scs ? 
"ONLINE" : + "OFFLINE"); + return -1; + } if (unlikely(n->cq[0])) { trace_pci_nvme_err_startfail_cq(); return -1; @@ -6414,6 +6630,12 @@ static uint64_t nvme_mmio_read(void *opaque, hwaddr addr, unsigned size) return 0; } + if (pci_is_vf(&n->parent_obj) && !nvme_sctrl(n)->scs && + addr != NVME_REG_CSTS) { + trace_pci_nvme_err_ignored_mmio_vf_offline(addr, size); + return 0; + } + /* * When PMRWBM bit 1 is set then read from * from PMRSTS should ensure prior writes @@ -6563,6 +6785,12 @@ static void nvme_mmio_write(void *opaque, hwaddr addr, uint64_t data, trace_pci_nvme_mmio_write(addr, data, size); + if (pci_is_vf(&n->parent_obj) && !nvme_sctrl(n)->scs && + addr != NVME_REG_CSTS) { + trace_pci_nvme_err_ignored_mmio_vf_offline(addr, size); + return; + } + if (addr < sizeof(n->bar)) { nvme_write_bar(n, addr, data, size); } else { @@ -7297,9 +7525,34 @@ static void nvme_pci_reset(DeviceState *qdev) nvme_ctrl_reset(n, NVME_RESET_FUNCTION); } +static void nvme_sriov_pre_write_ctrl(PCIDevice *dev, uint32_t address, + uint32_t val, int len) +{ + NvmeCtrl *n = NVME(dev); + NvmeSecCtrlEntry *sctrl; + uint16_t sriov_cap = dev->exp.sriov_cap; + uint32_t off = address - sriov_cap; + int i, num_vfs; + + if (!sriov_cap) { + return; + } + + if (range_covers_byte(off, len, PCI_SRIOV_CTRL)) { + if (!(val & PCI_SRIOV_CTRL_VFE)) { + num_vfs = pci_get_word(dev->config + sriov_cap + PCI_SRIOV_NUM_VF); + for (i = 0; i < num_vfs; i++) { + sctrl = &n->sec_ctrl_list.sec[i]; + nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false); + } + } + } +} + static void nvme_pci_write_config(PCIDevice *dev, uint32_t address, uint32_t val, int len) { + nvme_sriov_pre_write_ctrl(dev, address, val, len); pci_default_write_config(dev, address, val, len); pcie_cap_flr_write_config(dev, address, val, len); } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 9afa5e1a930a..99437d39bb51 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -340,6 +340,7 @@ static inline const char *nvme_adm_opc_str(uint8_t opc) case NVME_ADM_CMD_GET_FEATURES: return "NVME_ADM_CMD_GET_FEATURES"; case NVME_ADM_CMD_ASYNC_EV_REQ: return "NVME_ADM_CMD_ASYNC_EV_REQ"; case NVME_ADM_CMD_NS_ATTACHMENT: return "NVME_ADM_CMD_NS_ATTACHMENT"; + case NVME_ADM_CMD_VIRT_MNGMT: return "NVME_ADM_CMD_VIRT_MNGMT"; case NVME_ADM_CMD_FORMAT_NVM: return "NVME_ADM_CMD_FORMAT_NVM"; default: return "NVME_ADM_CMD_UNKNOWN"; } @@ -491,6 +492,10 @@ typedef struct NvmeCtrl { NvmePriCtrlCap pri_ctrl_cap; NvmeSecCtrlList sec_ctrl_list; + struct { + uint16_t vqrfap; + uint16_t virfap; + } next_pri_ctrl_cap; /* These override pri_ctrl_cap after reset */ } NvmeCtrl; typedef enum NvmeResetType { @@ -542,6 +547,21 @@ static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n) return NULL; } +static inline NvmeSecCtrlEntry *nvme_sctrl_for_cntlid(NvmeCtrl *n, + uint16_t cntlid) +{ + NvmeSecCtrlList *list = &n->sec_ctrl_list; + uint8_t i; + + for (i = 0; i < list->numcntl; i++) { + if (le16_to_cpu(list->sec[i].scid) == cntlid) { + return &list->sec[i]; + } + } + + return NULL; +} + void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns); uint16_t nvme_bounce_data(NvmeCtrl *n, void *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req); diff --git a/hw/nvme/trace-events b/hw/nvme/trace-events index b07864c57322..065e1c891df4 100644 --- a/hw/nvme/trace-events +++ b/hw/nvme/trace-events @@ -111,6 +111,7 @@ pci_nvme_clear_ns_close(uint32_t state, uint64_t slba) "zone state=%"PRIu32", sl pci_nvme_clear_ns_reset(uint32_t state, uint64_t slba) "zone state=%"PRIu32", slba=%"PRIu64" transitioned to 
Empty state" pci_nvme_zoned_zrwa_implicit_flush(uint64_t zslba, uint32_t nlb) "zslba 0x%"PRIx64" nlb %"PRIu32"" pci_nvme_pci_reset(void) "PCI Function Level Reset" +pci_nvme_virt_mngmt(uint16_t cid, uint16_t act, uint16_t cntlid, const char* rt, uint16_t nr) "cid %"PRIu16", act=0x%"PRIx16", ctrlid=%"PRIu16" %s nr=%"PRIu16"" # error conditions pci_nvme_err_mdts(size_t len) "len %zu" @@ -180,7 +181,9 @@ pci_nvme_err_startfail_asqent_sz_zero(void) "nvme_start_ctrl failed because the pci_nvme_err_startfail_acqent_sz_zero(void) "nvme_start_ctrl failed because the admin completion queue size is zero" pci_nvme_err_startfail_zasl_too_small(uint32_t zasl, uint32_t pagesz) "nvme_start_ctrl failed because zone append size limit %"PRIu32" is too small, needs to be >= %"PRIu32"" pci_nvme_err_startfail(void) "setting controller enable bit failed" +pci_nvme_err_startfail_virt_state(uint16_t vq, uint16_t vi, const char *state) "nvme_start_ctrl failed due to ctrl state: vi=%u vq=%u %s" pci_nvme_err_invalid_mgmt_action(uint8_t action) "action=0x%"PRIx8"" +pci_nvme_err_ignored_mmio_vf_offline(uint64_t addr, unsigned size) "addr 0x%"PRIx64" size %d" # undefined behavior pci_nvme_ub_mmiowr_misaligned32(uint64_t offset) "MMIO write not 32-bit aligned, offset=0x%"PRIx64"" diff --git a/include/block/nvme.h b/include/block/nvme.h index 58d08d5c2aaf..373c70b5ca7f 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -595,6 +595,7 @@ enum NvmeAdminCommands { NVME_ADM_CMD_ACTIVATE_FW = 0x10, NVME_ADM_CMD_DOWNLOAD_FW = 0x11, NVME_ADM_CMD_NS_ATTACHMENT = 0x15, + NVME_ADM_CMD_VIRT_MNGMT = 0x1c, NVME_ADM_CMD_FORMAT_NVM = 0x80, NVME_ADM_CMD_SECURITY_SEND = 0x81, NVME_ADM_CMD_SECURITY_RECV = 0x82, @@ -899,6 +900,10 @@ enum NvmeStatusCodes { NVME_NS_PRIVATE = 0x0119, NVME_NS_NOT_ATTACHED = 0x011a, NVME_NS_CTRL_LIST_INVALID = 0x011c, + NVME_INVALID_CTRL_ID = 0x011f, + NVME_INVALID_SEC_CTRL_STATE = 0x0120, + NVME_INVALID_NUM_RESOURCES = 0x0121, + NVME_INVALID_RESOURCE_ID = 0x0122, NVME_CONFLICTING_ATTRS = 0x0180, NVME_INVALID_PROT_INFO = 0x0181, NVME_WRITE_TO_RO = 0x0182, @@ -1598,6 +1603,18 @@ typedef struct QEMU_PACKED NvmeSecCtrlList { NvmeSecCtrlEntry sec[127]; } NvmeSecCtrlList; +typedef enum NvmeVirtMngmtAction { + NVME_VIRT_MNGMT_ACTION_PRM_ALLOC = 0x01, + NVME_VIRT_MNGMT_ACTION_SEC_OFFLINE = 0x07, + NVME_VIRT_MNGMT_ACTION_SEC_ASSIGN = 0x08, + NVME_VIRT_MNGMT_ACTION_SEC_ONLINE = 0x09, +} NvmeVirtMngmtAction; + +typedef enum NvmeVirtualResourceType { + NVME_VIRT_RES_QUEUE = 0x00, + NVME_VIRT_RES_INTERRUPT = 0x01, +} NvmeVirtualResourceType; + static inline void _nvme_check_size(void) { QEMU_BUILD_BUG_ON(sizeof(NvmeBar) != 4096); From patchwork Thu Jun 23 21:34:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893220 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BE1F3C433EF for ; Thu, 23 Jun 2022 21:51:12 +0000 (UTC) Received: from localhost ([::1]:59570 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UjH-0001nO-T9 for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:51:11 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55164) by lists.gnu.org with esmtps 
From: Klaus Jensen
To: Peter Maydell , qemu-devel@nongnu.org
Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S.
Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , Lukasz Maniak , Klaus Jensen Subject: [PULL 10/15] docs: Add documentation for SR-IOV and Virtualization Enhancements Date: Thu, 23 Jun 2022 23:34:37 +0200 Message-Id: <20220623213442.67789-11-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Lukasz Maniak Documentation describes 5 new parameters being added regarding SR-IOV: sriov_max_vfs sriov_vq_flexible sriov_vi_flexible sriov_max_vi_per_vf sriov_max_vq_per_vf The description also includes the simplest possible QEMU invocation and the series of NVMe commands required to enable SR-IOV support. Signed-off-by: Lukasz Maniak Acked-by: Michael S. Tsirkin Reviewed-by: Klaus Jensen Signed-off-by: Klaus Jensen --- docs/system/devices/nvme.rst | 82 ++++++++++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) diff --git a/docs/system/devices/nvme.rst b/docs/system/devices/nvme.rst index b5acb2a9c19d..aba253304e46 100644 --- a/docs/system/devices/nvme.rst +++ b/docs/system/devices/nvme.rst @@ -239,3 +239,85 @@ The virtual namespace device supports DIF- and DIX-based protection information to ``1`` to transfer protection information as the first eight bytes of metadata. Otherwise, the protection information is transferred as the last eight bytes. + +Virtualization Enhancements and SR-IOV (Experimental Support) +------------------------------------------------------------- + +The ``nvme`` device supports Single Root I/O Virtualization and Sharing +along with Virtualization Enhancements. The controller has to be linked to +an NVM Subsystem device (``nvme-subsys``) for use with SR-IOV. + +A number of parameters are present (**please note, that they may be +subject to change**): + +``sriov_max_vfs`` (default: ``0``) + Indicates the maximum number of PCIe virtual functions supported + by the controller. Specifying a non-zero value enables reporting of both + SR-IOV and ARI (Alternative Routing-ID Interpretation) capabilities + by the NVMe device. Virtual function controllers will not report SR-IOV. + +``sriov_vq_flexible`` + Indicates the total number of flexible queue resources assignable to all + the secondary controllers. Implicitly sets the number of primary + controller's private resources to ``(max_ioqpairs - sriov_vq_flexible)``. + +``sriov_vi_flexible`` + Indicates the total number of flexible interrupt resources assignable to + all the secondary controllers. Implicitly sets the number of primary + controller's private resources to ``(msix_qsize - sriov_vi_flexible)``. 
+ +``sriov_max_vi_per_vf`` (default: ``0``) + Indicates the maximum number of virtual interrupt resources assignable + to a secondary controller. The default ``0`` resolves to + ``(sriov_vi_flexible / sriov_max_vfs)``. + +``sriov_max_vq_per_vf`` (default: ``0``) + Indicates the maximum number of virtual queue resources assignable to + a secondary controller. The default ``0`` resolves to + ``(sriov_vq_flexible / sriov_max_vfs)``. + +The simplest possible invocation enables the capability to set up one VF +controller and assign an admin queue, an IO queue, and an MSI-X interrupt. + +.. code-block:: console + + -device nvme-subsys,id=subsys0 + -device nvme,serial=deadbeef,subsys=subsys0,sriov_max_vfs=1, + sriov_vq_flexible=2,sriov_vi_flexible=1 + +The minimum steps required to configure a functional NVMe secondary +controller are: + + * unbind flexible resources from the primary controller + +.. code-block:: console + + nvme virt-mgmt /dev/nvme0 -c 0 -r 1 -a 1 -n 0 + nvme virt-mgmt /dev/nvme0 -c 0 -r 0 -a 1 -n 0 + + * perform a Function Level Reset on the primary controller to actually + release the resources + +.. code-block:: console + + echo 1 > /sys/bus/pci/devices/0000:01:00.0/reset + + * enable the VF + +.. code-block:: console + + echo 1 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs + + * assign the flexible resources to the VF and set it ONLINE + +.. code-block:: console + + nvme virt-mgmt /dev/nvme0 -c 1 -r 1 -a 8 -n 1 + nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 8 -n 2 + nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 9 -n 0 + + * bind the NVMe driver to the VF + +.. code-block:: console + + echo 0000:01:00.1 > /sys/bus/pci/drivers/nvme/bind \ No newline at end of file From patchwork Thu Jun 23 21:34:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893218 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C58ADC433EF for ; Thu, 23 Jun 2022 21:49:47 +0000 (UTC) Received: from localhost ([::1]:56148 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Uhu-0007nu-Nu for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:49:46 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55188) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUL-00054R-2I; Thu, 23 Jun 2022 17:35:45 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:37405) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUI-0006hB-3i; Thu, 23 Jun 2022 17:35:42 -0400 Received: from compute2.internal (compute2.nyi.internal [10.202.2.46]) by mailout.west.internal (Postfix) with ESMTP id 3CE8C3200980; Thu, 23 Jun 2022 17:35:39 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute2.internal (MEProxy); Thu, 23 Jun 2022 17:35:40 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020138; x= 1656106538; bh=dFOpANDDNYBPxrDO7dEroeNRMyqlauLrX98CaBQGQiQ=; b=Q
E5Dy+RhWlh4Oc5rz+4LMT8TtBN0crE1OGjrQcBrK359FYj+rvnmLMZshvOLW+zAq loF2E0aQ36K49zDcXDNPSts66vghnjlnr7pWndfb999dlfYb/zeS0hZKyYqHkwuZ mb7h5A6MmlT2d1Q4E5FSD3940QtLx0mD4nEymPScr80dBck3md9hYkjJJ/JusxEh Kyr7a0IJHjB92c3TkZxikXlBHw9YUUXNM1q3AeRbkJP8lLvS7w1/We/E6cmKcuDS CHVQje6CCoA5DdkSZW04JtT7lA6vfO9HBr72+Ckt0Ln0tKBskqGV7/qb691vRG46 u5EFRnJUlu3f/eNo1bK0w== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding :content-type:date:date:feedback-id:feedback-id:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy :x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1656020138; x= 1656106538; bh=dFOpANDDNYBPxrDO7dEroeNRMyqlauLrX98CaBQGQiQ=; b=s 8LSfPEK2NV4rT0oEWv/v887n0rAKYNJm3j9xOvHRfbBHD2eu+tojU+bQT8GuGd+c KvI38TIpnY6rrZ4zIEFUKoY7b2pcLhyZwJLeuvaQdZzqbAYOoMLTXDqc/F7+5I28 sGil3t2gWPGE6Woe6CaTE6G35gJ7mIRU7ONhGjql7QuLMCJBlTQZDm9hJITJ/LXy Sm3S1oSdPbPXdv6mdfognbS/Sy73f6B3GLdaxffH7m/hS9iPODqVbKV6X9gxMyOf eK8c6H1rW6efxUZ8w4IbxbRhXptSFW9WYXX9XBezFlD1ruOAhjeipU9PRcMgRI7H Ly2ltKvKFKnArnKCykAGA== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefjedgudeigecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepmfhl rghushculfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrf grthhtvghrnhepfeevtdeuteeuudffvefggfdtfedtueelfffhieegffekgeefjeefffet jeeihfdvnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomh epihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:36 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 11/15] hw/nvme: Update the initalization place for the AER queue Date: Thu, 23 Jun 2022 23:34:38 +0200 Message-Id: <20220623213442.67789-12-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk This patch updates the initialization place for the AER queue, so it’s initialized once, at controller initialization, and not every time controller is enabled. 
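[Editor's note, not part of the patch: the paragraph above describes a common pattern, moving one-time setup out of the enable path and into the one-shot initialization path. A minimal sketch of that idea in plain C follows; the type and function names are illustrative stand-ins, not the QEMU NvmeCtrl API.]

    #include <stddef.h>

    struct event { struct event *next; };

    struct ctrl {
        struct event *aer_queue;   /* stand-in for the controller's AER event queue */
    };

    /* Runs exactly once, when the device is created. */
    void ctrl_init_state(struct ctrl *n)
    {
        n->aer_queue = NULL;
    }

    /* Runs on every enable (CC.EN 0 -> 1). It deliberately does not
     * re-initialize the event queue, so an event generated while the
     * controller is disabled never lands on an uninitialized queue. */
    int ctrl_start(struct ctrl *n)
    {
        (void)n;
        return 0;
    }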
While the original version works for a non-SR-IOV device, as it’s hard to interact with the controller if it’s not enabled, the multiple reinitialization is not necessarily correct. With the SR/IOV feature enabled a segfault can happen: a VF can have its controller disabled, while a namespace can still be attached to the controller through the parent PF. An event generated in such case ends up on an uninitialized queue. While it’s an interesting question whether a VF should support AER in the first place, I don’t think it must be answered today. Signed-off-by: Łukasz Gieryk Reviewed-by: Klaus Jensen Acked-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 20f1a7399592..658584d417fe 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -6328,8 +6328,6 @@ static int nvme_start_ctrl(NvmeCtrl *n) nvme_set_timestamp(n, 0ULL); - QTAILQ_INIT(&n->aer_queue); - nvme_select_iocs(n); return 0; @@ -6989,6 +6987,7 @@ static void nvme_init_state(NvmeCtrl *n) n->features.temp_thresh_hi = NVME_TEMPERATURE_WARNING; n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1); + QTAILQ_INIT(&n->aer_queue); list->numcntl = cpu_to_le16(max_vfs); for (i = 0; i < max_vfs; i++) { From patchwork Thu Jun 23 21:34:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 17CB1C433EF for ; Thu, 23 Jun 2022 21:54:10 +0000 (UTC) Received: from localhost ([::1]:38770 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4UmA-0006rh-3e for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:54:10 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55218) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUO-00055r-F6; Thu, 23 Jun 2022 17:35:49 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:57607) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUM-0006hT-Lm; Thu, 23 Jun 2022 17:35:48 -0400 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id C08BD320098C; Thu, 23 Jun 2022 17:35:43 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Thu, 23 Jun 2022 17:35:45 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020143; x= 1656106543; bh=h4o7AOFZwCbAGKNwqBDwSEl4y0eFfp1Y4AdI/3OaUNQ=; b=g NIZuiAmN1ihdaVcTuWF3TBjfPZXosiL2OCRTO5UhiBx1Mz60K1LccbK52Rl2+Zwj EMngF8hrY1qm4QxN4fuyfy80qI/HxmjoK/HRZpyv80XsCbxXjncB6ycTMmquD062 r5sAHY2y9t/oLM5fkMyNlUM7VS/M/N/GiTvk5UZHB5I7U2pJmWMNBKDySBiC/3wi RmUG5f2A1pubFb4uVWxL67ykvX2bausfX1g3P8PrHtpZ+iFmIwZd4oASr2tmWwTZ A5inzZ0j8Hu3pid4zmpMfRcO1Q3cndRndiikS7VhylCwl5v2IYw9P9edIv8aaR6a spOUugigEJr+hArSj0NzA== DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding :content-type:date:date:feedback-id:feedback-id:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy :x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1656020143; x= 1656106543; bh=h4o7AOFZwCbAGKNwqBDwSEl4y0eFfp1Y4AdI/3OaUNQ=; b=u ppeQCRcUkYLIWAFugaPlbKlOh9BGt/HD0G4VlA6Vk5ZacmJ/YLYY/v1JRaxvVsgV q1Fe/L0q6xB2kTh9vDss7ykYEM/SPv2l/jWGFKI3HqtQ/LFNiDI1mWHSq961HSMW euO+KimWUyUej96wYF1uf706osy+4bwCJLfhp6z0zHvlhMz38fqDAOUWdwmPOBrW SXmqEOzktnJxsX0V/wfkEfU2EpvzintqQ0gFSvvC9494Q6c6GtE/qO7H0Lp559M0 ZJjqxgCtuQEVXmz7Yc5MNQI1zPfE3RzAd23tLfhZpRK8ykA3tElc4mzZDPyY49Zg yafhaUiCp97nZH2IegBcg== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefkecutefuodetggdotefrodftvfcurf hrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecuuegr ihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenucfjug hrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepfeevtdeuteeuudffvefggfdtfedtueelfffhieegffekgeefjeefffetjeeihfdv necuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepihhtsh esihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:40 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , =?utf-8?q?=C5=81ukasz_Gieryk?= , Klaus Jensen Subject: [PULL 12/15] hw/acpi: Make the PCI hot-plug aware of SR-IOV Date: Thu, 23 Jun 2022 23:34:39 +0200 Message-Id: <20220623213442.67789-13-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Łukasz Gieryk PCI device capable of SR-IOV support is a new, still-experimental feature with only a single working example of the Nvme device. 
This patch is an attempt to fix a double-free problem when an SR-IOV-capable Nvme device is hot-unplugged in the following scenario: Qemu CLI: --------- -device pcie-root-port,slot=0,id=rp0 -device nvme-subsys,id=subsys0 -device nvme,id=nvme0,bus=rp0,serial=deadbeef,subsys=subsys0,sriov_max_vfs=1,sriov_vq_flexible=2,sriov_vi_flexible=1 Guest OS: --------- sudo nvme virt-mgmt /dev/nvme0 -c 0 -r 1 -a 1 -n 0 sudo nvme virt-mgmt /dev/nvme0 -c 0 -r 0 -a 1 -n 0 echo 1 > /sys/bus/pci/devices/0000:01:00.0/reset sleep 1 echo 1 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs nvme virt-mgmt /dev/nvme0 -c 1 -r 1 -a 8 -n 1 nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 8 -n 2 nvme virt-mgmt /dev/nvme0 -c 1 -r 0 -a 9 -n 0 sleep 2 echo 01:00.1 > /sys/bus/pci/drivers/nvme/bind Qemu monitor: ------------- device_del nvme0 Explanation of the problem and the proposed solution: 1) The current SR-IOV implementation assumes it’s the Physical Function that creates and deletes Virtual Functions. 2) It’s a design decision (for the Nvme device, at least) that the VFs are of the same class as the PF. Effectively, they share the dc->hotpluggable value. 3) When a VF is created, it’s added as a child node to the PF’s PCI bus slot. 4) Monitor/device_del triggers the ACPI mechanism. The implementation is not aware of SR-IOV and ejects the PF’s PCI slot, directly unrealizing all hot-pluggable (!acpi_pcihp_pc_no_hotplug) child nodes. 5) VFs are unrealized directly, which doesn’t work well with (1). SR-IOV structures are not updated, so when it’s the PF’s turn to be unrealized, it works on stale pointers to already-deleted VFs. The proposed fix is to make the PCI ACPI code aware of SR-IOV. Signed-off-by: Łukasz Gieryk Acked-by: Michael S. Tsirkin Reviewed-by: Michael S. Tsirkin Signed-off-by: Klaus Jensen --- hw/acpi/pcihp.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c index bf65bbea4940..84d75e6b846f 100644 --- a/hw/acpi/pcihp.c +++ b/hw/acpi/pcihp.c @@ -192,8 +192,12 @@ static bool acpi_pcihp_pc_no_hotplug(AcpiPciHpState *s, PCIDevice *dev) * ACPI doesn't allow hotplug of bridge devices. Don't allow * hot-unplug of bridge devices unless they were added by hotplug * (and so, not described by acpi). + * + * Don't allow hot-unplug of SR-IOV Virtual Functions, as they + * will be removed implicitly when the Physical Function is unplugged.
*/ - return (pc->is_bridge && !dev->qdev.hotplugged) || !dc->hotpluggable; + return (pc->is_bridge && !dev->qdev.hotplugged) || !dc->hotpluggable || + pci_is_vf(dev); } static void acpi_pcihp_eject_slot(AcpiPciHpState *s, unsigned bsel, unsigned slots) From patchwork Thu Jun 23 21:34:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893223 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3F8E0C43334 for ; Thu, 23 Jun 2022 21:57:04 +0000 (UTC) Received: from localhost ([::1]:46316 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Uox-0003gz-Cg for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:57:03 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55238) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUT-00057d-1t; Thu, 23 Jun 2022 17:35:53 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:37755) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUR-0006hn-BY; Thu, 23 Jun 2022 17:35:52 -0400 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.west.internal (Postfix) with ESMTP id 6D5F3320098D; Thu, 23 Jun 2022 17:35:48 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute5.internal (MEProxy); Thu, 23 Jun 2022 17:35:49 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:content-type:date:date:from :from:in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020147; x= 1656106547; bh=eEvKSMaTLKY9dfsde1sYRfcZsvX+6J+8FjefYyFN6X8=; b=D QVA70tmLtm7alZ4U+9WXVQ9o0XTyt29uSTyHl/fFyCtoxJ7fcmfueyen/kE6SP15 t5OL1CjWrjHkJ54vQQuw4fbY30bwtxpMNKP3Ww2Z/9dIh063XFx8qKaevb13u8aF +hnBWe7TRif4y/AY1Wk1HocBk6jW577ZB9ZcAEiokccFZIDdAozFpGh3wMCSEEOg SdkQR3Ka9iVOtv1AogY+ZhU/VDjHpvY27ZED9fCG1n7DDzinTfEYPEWFwZucpzqp wHwvWCD1RmMO9kmbKmuJs3sjmi5wiGzrI5V/8NqtyCmc0fqkSMN/riCVR/qvYyNP +k44MubE36XfpL0JKW27Q== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding :content-type:date:date:feedback-id:feedback-id:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy :x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1656020147; x= 1656106547; bh=eEvKSMaTLKY9dfsde1sYRfcZsvX+6J+8FjefYyFN6X8=; b=T aVyhZN9qiLwagdn4K81cMQx/yZ38ZhHBkPp+O4/bYRoetWZnCo4J2XFjt9UcoTQa ZRry26iqgdIs7qQ7i9VQdnbKIVNTRPUIjYXYHSJw3ZVpmZiORLaMtPT1L/DmLJNi 5FhNdp0fUkqDi8if3uf3/7tH9I1kkxmoYoiM/F3a5h6vjsxXizyqtSiPuKs6tEt0 vBo6tRUPGzVeDeDB8q3DmpnY7JUl/CkrzJR2rbTDqi6nfUTtlP0YFn36pTGlP5Pa R/JjxgXKkzFJYEKRaAT1RCatdw5OOiOKxoPp+tQHBS5hmMm1/G8Sj134G59QU0WB G/0n6lSywU5GBBIbPdL9g== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefjedgudeifecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepmfhl 
rghushculfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrf grthhtvghrnhepfeevtdeuteeuudffvefggfdtfedtueelfffhieegffekgeefjeefffet jeeihfdvnecuvehluhhsthgvrhfuihiivgepgeenucfrrghrrghmpehmrghilhhfrhhomh epihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:45 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , Klaus Jensen , =?utf-8?q?=C5=81ukasz_Gieryk?= , Lukasz Maniak Subject: [PULL 13/15] hw/nvme: clean up CC register write logic Date: Thu, 23 Jun 2022 23:34:40 +0200 Message-Id: <20220623213442.67789-14-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen The SRIOV series exposed an issued with how CC register writes are handled and how CSTS is set in response to that. Specifically, after applying the SRIOV series, the controller could end up in a state with CC.EN set to '1' but with CSTS.RDY cleared to '0', causing drivers to expect CSTS.RDY to transition to '1' but timing out. Clean this up. Reviewed-by: Łukasz Gieryk Reviewed-by: Lukasz Maniak Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 38 ++++++++++++++++---------------------- 1 file changed, 16 insertions(+), 22 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 658584d417fe..a558f5cb29c1 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -6190,10 +6190,15 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) if (pci_is_vf(pci_dev)) { sctrl = nvme_sctrl(n); + stl_le_p(&n->bar.csts, sctrl->scs ? 
0 : NVME_CSTS_FAILED); } else { stl_le_p(&n->bar.csts, 0); } + + stl_le_p(&n->bar.intms, 0); + stl_le_p(&n->bar.intmc, 0); + stl_le_p(&n->bar.cc, 0); } static void nvme_ctrl_shutdown(NvmeCtrl *n) @@ -6405,20 +6410,21 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, nvme_irq_check(n); break; case NVME_REG_CC: + stl_le_p(&n->bar.cc, data); + trace_pci_nvme_mmio_cfg(data & 0xffffffff); - /* Windows first sends data, then sends enable bit */ - if (!NVME_CC_EN(data) && !NVME_CC_EN(cc) && - !NVME_CC_SHN(data) && !NVME_CC_SHN(cc)) - { - cc = data; + if (NVME_CC_SHN(data) && !(NVME_CC_SHN(cc))) { + trace_pci_nvme_mmio_shutdown_set(); + nvme_ctrl_shutdown(n); + csts &= ~(CSTS_SHST_MASK << CSTS_SHST_SHIFT); + csts |= NVME_CSTS_SHST_COMPLETE; + } else if (!NVME_CC_SHN(data) && NVME_CC_SHN(cc)) { + trace_pci_nvme_mmio_shutdown_cleared(); + csts &= ~(CSTS_SHST_MASK << CSTS_SHST_SHIFT); } if (NVME_CC_EN(data) && !NVME_CC_EN(cc)) { - cc = data; - - /* flush CC since nvme_start_ctrl() needs the value */ - stl_le_p(&n->bar.cc, cc); if (unlikely(nvme_start_ctrl(n))) { trace_pci_nvme_err_startfail(); csts = NVME_CSTS_FAILED; @@ -6429,22 +6435,10 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, } else if (!NVME_CC_EN(data) && NVME_CC_EN(cc)) { trace_pci_nvme_mmio_stopped(); nvme_ctrl_reset(n, NVME_RESET_CONTROLLER); - cc = 0; - csts &= ~NVME_CSTS_READY; - } - if (NVME_CC_SHN(data) && !(NVME_CC_SHN(cc))) { - trace_pci_nvme_mmio_shutdown_set(); - nvme_ctrl_shutdown(n); - cc = data; - csts |= NVME_CSTS_SHST_COMPLETE; - } else if (!NVME_CC_SHN(data) && NVME_CC_SHN(cc)) { - trace_pci_nvme_mmio_shutdown_cleared(); - csts &= ~NVME_CSTS_SHST_COMPLETE; - cc = data; + break; } - stl_le_p(&n->bar.cc, cc); stl_le_p(&n->bar.csts, csts); break; From patchwork Thu Jun 23 21:34:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893224 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C7386CCA47E for ; Thu, 23 Jun 2022 21:59:56 +0000 (UTC) Received: from localhost ([::1]:53568 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Urj-0000Cs-QC for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:59:55 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55272) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUY-0005CD-6J; Thu, 23 Jun 2022 17:35:59 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:53561) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUW-0006i3-CE; Thu, 23 Jun 2022 17:35:57 -0400 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.west.internal (Postfix) with ESMTP id 9368D320094C; Thu, 23 Jun 2022 17:35:53 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute4.internal (MEProxy); Thu, 23 Jun 2022 17:35:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:date:date:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020153; x= 
1656106553; bh=CV5ims5Ilnsvzmzz5PZpmfcmMDgX/DT20i9wIjCax3A=; b=I qnaKSZ2LKbQg4aJZgjlma/LxnuVmoFhnNDOFFcKp4Jto9pRljAUQm5OsaGk6uEFk k3wnwdtATJTes4iyo62wzq9oQD8SXW/MJ7BwZiUf0/mgoTc0lhpn9fdGlLvX4A9x Fya0S4Uxb7mtd8NsXCDaObrydS1aGv6Yv0cCbAnuRkyW0Nf8bPMFOcWw4xePeqCN tXwPLvawaTezlROl5e96g6Y4rj2elWdwJdbdB6QpokhsQS25DT6AL3tDtM1ok0XR zhkkbVptEcfMEgqLeNG/qLtLIaRJZJt4fV04ZQcSvpSyz3+TRzQFKRKR5KvNHGaz Jm6k3GYEMZBYZMnuKhKiQ== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding:date:date :feedback-id:feedback-id:from:from:in-reply-to:in-reply-to :message-id:mime-version:references:reply-to:sender:subject :subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender :x-sasl-enc; s=fm2; t=1656020153; x=1656106553; bh=CV5ims5Ilnsvz mzz5PZpmfcmMDgX/DT20i9wIjCax3A=; b=H3k+rEcDLmaz433lrECKRgfxK7lr+ gTL/DgPyLYsYgNWVJdafFh8B+MnFDsSxlGDXiHDmskD/Ni1+udgXRTphIShIImSW BpIJKqYKkXUQ2Tg/2sQO1vCcI/GCP5FuZOZ8Ag2AsJ0xnLrPoN5yb595EJw1hEa4 p5wC8CKPN2dOrAFIeEPWZnTU9KbqM5EmDIvb73b4OG97KuLPuIXmTUoSVgl0X3dH AUZL2lc0Rgn8ip7xPCScayusFnt3b2zCly8ajiv2lZsSx61DLQfPNJi5D4iy9pPa Ca3IxteIxkaDdKhd5QVj/hDQk+Lf+fgeSV+bvfFkJqtx6Ed7dzkUjvrPg== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefjedgudeifecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpefmlhgr uhhsucflvghnshgvnhcuoehithhssehirhhrvghlvghvrghnthdrughkqeenucggtffrrg htthgvrhhnpeejgfeilefgieevheekueevheehkeefveegiefgheefgfejjeehffefgedu jedugeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpe hithhssehirhhrvghlvghvrghnthdrughk X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:50 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , Klaus Jensen Subject: [PULL 14/15] Revert "hw/block/nvme: add support for sgl bit bucket descriptor" Date: Thu, 23 Jun 2022 23:34:41 +0200 Message-Id: <20220623213442.67789-15-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen This reverts commit d97eee64fef35655bd06f5c44a07fdb83a6274ae. The emulated controller correctly accounts for not including bit buckets in the controller-to-host data transfer, however it doesn't correctly account for the holes for the on-disk data offsets. 
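[Editor's note, not part of the patch: to make the revert rationale above concrete, the sketch below (plain C, illustrative types and names only, not the QEMU SGL mapper) shows the bookkeeping a correct read path needs. A Bit Bucket descriptor contributes no host-memory segment, but it must still advance the on-disk offset; per the commit message, the reverted code handled the first half but not the second.]

    #include <stdint.h>
    #include <stddef.h>

    enum sgl_type { SGL_DATA_BLOCK, SGL_BIT_BUCKET };

    struct sgl_desc {
        enum sgl_type type;
        uint64_t addr;   /* host buffer address; unused for bit buckets */
        uint32_t len;    /* bytes covered by this descriptor */
    };

    struct seg {
        uint64_t disk_off;   /* where the bytes live on the medium */
        uint64_t host_addr;  /* where, if anywhere, they are transferred */
        uint32_t len;
    };

    /* Build (disk offset, host address) pairs for a controller-to-host
     * transfer. Data blocks produce a segment; bit buckets produce none,
     * but in both cases the on-disk offset advances. */
    size_t map_read_sgl(const struct sgl_desc *sgl, size_t n,
                        uint64_t disk_off, struct seg *out)
    {
        size_t nout = 0;

        for (size_t i = 0; i < n; i++) {
            if (sgl[i].type == SGL_DATA_BLOCK) {
                out[nout].disk_off = disk_off;
                out[nout].host_addr = sgl[i].addr;
                out[nout].len = sgl[i].len;
                nout++;
            }
            disk_off += sgl[i].len;
        }

        return nout;
    }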
Reported-by: Keith Busch Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 29 ++++++----------------------- 1 file changed, 6 insertions(+), 23 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index a558f5cb29c1..15d580a904ef 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -850,10 +850,6 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg, uint8_t type = NVME_SGL_TYPE(segment[i].type); switch (type) { - case NVME_SGL_DESCR_TYPE_BIT_BUCKET: - if (cmd->opcode == NVME_CMD_WRITE) { - continue; - } case NVME_SGL_DESCR_TYPE_DATA_BLOCK: break; case NVME_SGL_DESCR_TYPE_SEGMENT: @@ -886,10 +882,6 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg, trans_len = MIN(*len, dlen); - if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) { - goto next; - } - addr = le64_to_cpu(segment[i].addr); if (UINT64_MAX - addr < dlen) { @@ -901,7 +893,6 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg, return status; } -next: *len -= trans_len; } @@ -959,8 +950,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sgl, seg_len = le32_to_cpu(sgld->len); /* check the length of the (Last) Segment descriptor */ - if ((!seg_len || seg_len & 0xf) && - (NVME_SGL_TYPE(sgld->type) != NVME_SGL_DESCR_TYPE_BIT_BUCKET)) { + if (!seg_len || seg_len & 0xf) { return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR; } @@ -998,26 +988,20 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sgl, last_sgld = &segment[nsgld - 1]; /* - * If the segment ends with a Data Block or Bit Bucket Descriptor Type, - * then we are done. + * If the segment ends with a Data Block, then we are done. */ - switch (NVME_SGL_TYPE(last_sgld->type)) { - case NVME_SGL_DESCR_TYPE_DATA_BLOCK: - case NVME_SGL_DESCR_TYPE_BIT_BUCKET: + if (NVME_SGL_TYPE(last_sgld->type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) { status = nvme_map_sgl_data(n, sg, segment, nsgld, &len, cmd); if (status) { goto unmap; } goto out; - - default: - break; } /* - * If the last descriptor was not a Data Block or Bit Bucket, then the - * current segment must not be a Last Segment. + * If the last descriptor was not a Data Block, then the current + * segment must not be a Last Segment. 
*/ if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) { status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR; @@ -7286,8 +7270,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev) id->vwc = NVME_VWC_NSID_BROADCAST_SUPPORT | NVME_VWC_PRESENT; id->ocfs = cpu_to_le16(NVME_OCFS_COPY_FORMAT_0 | NVME_OCFS_COPY_FORMAT_1); - id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORT_NO_ALIGN | - NVME_CTRL_SGLS_BITBUCKET); + id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORT_NO_ALIGN); nvme_init_subnqn(n); From patchwork Thu Jun 23 21:34:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 12893222 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B685FC43334 for ; Thu, 23 Jun 2022 21:55:40 +0000 (UTC) Received: from localhost ([::1]:41608 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1o4Unb-0000PX-OJ for qemu-devel@archiver.kernel.org; Thu, 23 Jun 2022 17:55:39 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55304) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUc-0005JT-FL; Thu, 23 Jun 2022 17:36:02 -0400 Received: from wout5-smtp.messagingengine.com ([64.147.123.21]:35727) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1o4UUa-0006ja-Py; Thu, 23 Jun 2022 17:36:02 -0400 Received: from compute2.internal (compute2.nyi.internal [10.202.2.46]) by mailout.west.internal (Postfix) with ESMTP id 10D41320094C; Thu, 23 Jun 2022 17:35:57 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute2.internal (MEProxy); Thu, 23 Jun 2022 17:35:59 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=cc:cc:content-transfer-encoding:date:date:from:from :in-reply-to:in-reply-to:message-id:mime-version:references :reply-to:sender:subject:subject:to:to; s=fm1; t=1656020157; x= 1656106557; bh=aQ5V7wENUGjIE2zkNYqp5gvM7R0B0zwTmp04q6h4IT8=; b=g QvGkFYalHu4GfpnD5ROxO6DVfK2bcpEgdzzqHz2BsBoIRBaiFrdgO4Xdyzc8WnPg 1ySxR/NH0d192hnZcpX1thjmweZB7R18a07lkpnxz4An6HNMHubhOroNdnYjR8bL mP6/3bHOnKc0W4/tIzHB2gg+KapOuqOc87WDU/DCddG8uYaiS4sQRdZQw+AEvjb2 d61TjVukjVV/KnLV1mBfevRaMjTVBHxDl8Sekv4N4vzZaduAvDXMHN6+ITxRd7wl QtFrR8KuKwviGzK0tAw+82rcAFVjYyUyHtg7Nz5ZzlNFF9EA3ko8gNeXkTzSuff+ H7xSpK+qGb0o8JWzZQW0A== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding:date:date :feedback-id:feedback-id:from:from:in-reply-to:in-reply-to :message-id:mime-version:references:reply-to:sender:subject :subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender :x-sasl-enc; s=fm2; t=1656020157; x=1656106557; bh=aQ5V7wENUGjIE 2zkNYqp5gvM7R0B0zwTmp04q6h4IT8=; b=sycLCxk4wcUGjAeks/b4lH+9akgqm 9sO+FdPSlHbmNElVxBvd7YK5J4XWr87zbG/flXnorC/sehaSHdvqZK6V7k4E4eNO huq4yxRrjEIVS/wVKkilpFJT6BjvyUFmN4cOaTMS482HhxmvQdtOY+GenCYFZWZg bbhyBW1yQCe7N6QTIKi4iZHf678QrDWaZGn91YUgtD+US+jvEaQI3670BtlrE46D GaMONfk35hq8fsjY+mO/S1za+vCfpw0h4P0hSQmPgnZd+a1m3pUbYayZL/NbJCzk zDkxN+unwjDdw7dvg5NB1gMg8x7BOyg+i2MYVJ8cQ1U4XP/f2TifhcxfQ== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrudefjedgudeigecutefuodetggdotefrod 
ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpefmlhgr uhhsucflvghnshgvnhcuoehithhssehirhhrvghlvghvrghnthdrughkqeenucggtffrrg htthgvrhhnpeejgfeilefgieevheekueevheehkeefveegiefgheefgfejjeehffefgedu jedugeenucevlhhushhtvghrufhiiigvpedunecurfgrrhgrmhepmhgrihhlfhhrohhmpe hithhssehirhhrvghlvghvrghnthdrughk X-ME-Proxy: Feedback-ID: idc91472f:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu, 23 Jun 2022 17:35:55 -0400 (EDT) From: Klaus Jensen To: Peter Maydell , qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Igor Mammedov , Ani Sinha , Hanna Reitz , Kevin Wolf , "Michael S. Tsirkin" , Klaus Jensen , qemu-block@nongnu.org, Keith Busch , Fam Zheng , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Marcel Apfelbaum , Klaus Jensen Subject: [PULL 15/15] hw/nvme: clear aen mask on reset Date: Thu, 23 Jun 2022 23:34:42 +0200 Message-Id: <20220623213442.67789-16-its@irrelevant.dk> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220623213442.67789-1-its@irrelevant.dk> References: <20220623213442.67789-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.21; envelope-from=its@irrelevant.dk; helo=wout5-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen The internally maintained AEN mask is not cleared on reset. Fix this. Reviewed-by: Keith Busch Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 1 + 1 file changed, 1 insertion(+) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 15d580a904ef..d349b3e42620 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -6167,6 +6167,7 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst) } n->aer_queued = 0; + n->aer_mask = 0; n->outstanding_aers = 0; n->qs_created = false;
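[Editor's note, not part of the patch: together with the surrounding context lines (aer_queued, outstanding_aers, qs_created), the hunk above makes the reset path drop all of the asynchronous-event bookkeeping in one place, now including the mask of already-reported event types. A minimal sketch of that idea in plain C follows; the struct is an illustrative stand-in, not QEMU's NvmeCtrl, and the field comments are assumptions based on the field names.]

    #include <stdint.h>

    struct aer_state {
        uint32_t aer_mask;         /* event types already reported and masked */
        int aer_queued;            /* events waiting for a host AER command */
        int outstanding_aers;      /* AER commands currently held by the device */
    };

    /* Controller reset clears the AER bookkeeping together, including the
     * mask, which is the field this patch adds to the list. */
    void aer_state_reset(struct aer_state *s)
    {
        s->aer_mask = 0;
        s->aer_queued = 0;
        s->outstanding_aers = 0;
    }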