From patchwork Thu Sep 2 19:50:05 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472405
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 01/13] Documentation/cxl: Add bus internal docs
Date: Thu, 2 Sep 2021 12:50:05 -0700
Message-Id: <20210902195017.2516472-2-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Kernel docs are already present in this file, but nothing instructs the docs build to generate them. Address that.

Signed-off-by: Ben Widawsky
Acked-by: Jonathan Cameron
---
 Documentation/driver-api/cxl/memory-devices.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 356f70d28316..a18175bae7a6 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -39,6 +39,9 @@ CXL Core
 .. kernel-doc:: drivers/cxl/core/bus.c
    :doc: cxl core
 
+.. kernel-doc:: drivers/cxl/core/bus.c
+   :identifiers:
+
 .. kernel-doc:: drivers/cxl/core/pmem.c
    :internal:

From patchwork Thu Sep 2 19:50:06 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472411
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 02/13] cxl/core/bus: Add kernel docs for decoder ops
Date: Thu, 2 Sep 2021 12:50:06 -0700
Message-Id: <20210902195017.2516472-3-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Since the code to add decoders for switches and endpoints is on the horizon, document the new interfaces that will be consumed by them.
Signed-off-by: Ben Widawsky
---
 drivers/cxl/core/bus.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 3991ac231c3e..9d98dd50d424 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -453,6 +453,19 @@ int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
 }
 EXPORT_SYMBOL_GPL(cxl_add_dport);
 
+/**
+ * cxl_decoder_alloc - Allocate a new CXL decoder
+ * @port: owning port of this decoder
+ * @nr_targets: downstream targets accessible by this decoder
+ *
+ * A port should contain one or more decoders. Each of those decoders enables
+ * some address space for CXL.mem utilization. Therefore, it is logical to
+ * allocate decoders while enumerating a port. While >= 1 decoder is required
+ * by the CXL specification, due to error conditions it is possible that a
+ * port may have 0 decoders.
+ *
+ * Return: A new cxl decoder to be added with cxl_decoder_add()
+ */
 struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
 {
 	struct cxl_decoder *cxld;
@@ -491,6 +504,21 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
 }
 EXPORT_SYMBOL_GPL(cxl_decoder_alloc);
 
+/**
+ * cxl_decoder_add - Add a decoder with targets
+ * @host: The containing struct device. This is typically the PCI device
+ *        that is CXL capable.
+ * @cxld: The cxl decoder allocated by cxl_decoder_alloc()
+ * @target_map: A list of downstream ports that this decoder can direct
+ *              memory traffic to. These numbers should correspond with the
+ *              port number in the PCIe Link Capabilities structure.
+ *
+ * Return: 0 if the decoder was successfully added.
+ *
+ * Certain types of decoders may not have any targets. The main example of
+ * this is an endpoint device. A more awkward example is a hostbridge whose
+ * root ports get hot added (technically possible, though unlikely).
+ */
 int cxl_decoder_add(struct device *host, struct cxl_decoder *cxld,
 		    int *target_map)
 {

From patchwork Thu Sep 2 19:50:07 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472409
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 03/13] cxl/core: Ignore interleave when adding decoders
Date: Thu, 2 Sep 2021 12:50:07 -0700
Message-Id: <20210902195017.2516472-4-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Decoders will be added to the bus either already active ("committed" in spec parlance) or inactive. From the driver perspective, the former are the devices brought up by system firmware: decoders that implement volatile regions, persistent regions, or platform specific (i.e. CFMWS) constraints. Such devices already have their interleave programming in place. Inactive decoders, on the other hand, have no interleave programming in place; that set comprises hostbridges, switches, and endpoint devices.

Allow adding inactive decoders by removing this check.
Signed-off-by: Ben Widawsky
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/core/bus.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 9d98dd50d424..8d5061b0794d 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -532,9 +532,6 @@ int cxl_decoder_add(struct device *host, struct cxl_decoder *cxld,
 	if (IS_ERR(cxld))
 		return PTR_ERR(cxld);
 
-	if (cxld->interleave_ways < 1)
-		return -EINVAL;
-
 	port = to_cxl_port(cxld->dev.parent);
 	device_lock(&port->dev);
 	if (list_empty(&port->dports)) {

From patchwork Thu Sep 2 19:50:08 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472413
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 04/13] cxl: Introduce endpoint decoders
Date: Thu, 2 Sep 2021 12:50:08 -0700
Message-Id: <20210902195017.2516472-5-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Endpoints have decoders too, and it is useful to share the same infrastructure from cxl_core. Endpoints do not have dports (downstream targets), only the underlying physical medium, so some special casing is needed.

There is no functional change yet, as endpoints do not actually enumerate decoders.
Signed-off-by: Ben Widawsky
---
 drivers/cxl/core/bus.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
index 8d5061b0794d..6202ce5a5ac2 100644
--- a/drivers/cxl/core/bus.c
+++ b/drivers/cxl/core/bus.c
@@ -175,6 +175,12 @@ static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = {
 	NULL,
 };
 
+static const struct attribute_group *cxl_decoder_endpoint_attribute_groups[] = {
+	&cxl_decoder_base_attribute_group,
+	&cxl_base_attribute_group,
+	NULL,
+};
+
 static void cxl_decoder_release(struct device *dev)
 {
 	struct cxl_decoder *cxld = to_cxl_decoder(dev);
@@ -184,6 +190,12 @@ static void cxl_decoder_release(struct device *dev)
 	kfree(cxld);
 }
 
+static const struct device_type cxl_decoder_endpoint_type = {
+	.name = "cxl_decoder_endpoint",
+	.release = cxl_decoder_release,
+	.groups = cxl_decoder_endpoint_attribute_groups,
+};
+
 static const struct device_type cxl_decoder_switch_type = {
 	.name = "cxl_decoder_switch",
 	.release = cxl_decoder_release,
@@ -196,6 +208,11 @@ static const struct device_type cxl_decoder_root_type = {
 	.groups = cxl_decoder_root_attribute_groups,
 };
 
+static bool is_endpoint_decoder(struct device *dev)
+{
+	return dev->type == &cxl_decoder_endpoint_type;
+}
+
 bool is_root_decoder(struct device *dev)
 {
 	return dev->type == &cxl_decoder_root_type;
@@ -472,7 +489,7 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
 	struct device *dev;
 	int rc = 0;
 
-	if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets < 1)
+	if (nr_targets > CXL_DECODER_MAX_INTERLEAVE)
 		return ERR_PTR(-EINVAL);
 
 	cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
@@ -491,8 +508,11 @@ struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
 	dev->parent = &port->dev;
 	dev->bus = &cxl_bus_type;
 
+	/* Endpoints don't have a target list */
+	if (nr_targets == 0)
+		dev->type = &cxl_decoder_endpoint_type;
 	/* root ports do not have a cxl_port_type parent */
-	if (port->dev.parent->type == &cxl_port_type)
+	else if (port->dev.parent->type == &cxl_port_type)
 		dev->type = &cxl_decoder_switch_type;
 	else
 		dev->type = &cxl_decoder_root_type;
@@ -532,9 +552,11 @@ int cxl_decoder_add(struct device *host, struct cxl_decoder *cxld,
 	if (IS_ERR(cxld))
 		return PTR_ERR(cxld);
 
+	dev = &cxld->dev;
+
 	port = to_cxl_port(cxld->dev.parent);
 	device_lock(&port->dev);
-	if (list_empty(&port->dports)) {
+	if (is_endpoint_decoder(dev) && list_empty(&port->dports)) {
 		rc = -EINVAL;
 		goto out_unlock;
 	}
@@ -551,7 +573,6 @@ int cxl_decoder_add(struct device *host, struct cxl_decoder *cxld,
 	}
 	device_unlock(&port->dev);
 
-	dev = &cxld->dev;
 	rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id);
 	if (rc)
 		return rc;

From patchwork Thu Sep 2 19:50:09 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472415
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 05/13] cxl/pci: Disambiguate cxl_pci further from cxl_mem
Date: Thu, 2 Sep 2021 12:50:09 -0700
Message-Id: <20210902195017.2516472-6-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Commit 21e9f76733a8 ("cxl: Rename mem to pci") introduced the cxl_pci driver, which had formerly been named cxl_mem. At the time, the goal was to be as light touch as possible because there were other patches in flight. Since things have settled now, and a new cxl_mem driver will be introduced shortly, spend the LOC now to clean up the existing names.

While here, fix the kernel docs to explain the situation better after the core rework that has already landed.
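[Editor's note: a mechanical rename of this kind is commonly scripted. Something like the following sed invocation, which is illustrative and not necessarily how this patch was generated, covers the bulk of the changes:]

```shell
# Rename the cxl_mem_* helpers to cxl_pci_* in a scratch file.
# (Illustrative only; review the result, since identifiers such as
# cxl_mem_driver that should survive the rename need manual judgement.)
printf 'static int cxl_mem_probe(void);\n' > pci_demo.c
sed -i 's/cxl_mem_/cxl_pci_/g' pci_demo.c
cat pci_demo.c   # -> static int cxl_pci_probe(void);
rm pci_demo.c
```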
Signed-off-by: Ben Widawsky
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/pci.c | 70 +++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index b13884275d96..6931885c83ce 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -16,14 +16,14 @@
 *
 * This implements the PCI exclusive functionality for a CXL device as it is
 * defined by the Compute Express Link specification. CXL devices may surface
- * certain functionality even if it isn't CXL enabled.
+ * certain functionality even if it isn't CXL enabled. While this driver is
+ * focused around the PCI specific aspects of a CXL device, it binds to the
+ * specific CXL memory device class code, and therefore the implementation of
+ * cxl_pci is focused around CXL memory devices.
 *
- * The driver has several responsibilities, mainly:
+ * The driver has two responsibilities:
 *  - Create the memX device and register on the CXL bus.
 *  - Enumerate device's register interface and map them.
- *  - Probe the device attributes to establish sysfs interface.
- *  - Provide an IOCTL interface to userspace to communicate with the device for
- *    things like firmware update.
 */

 #define cxl_doorbell_busy(cxlm) \
@@ -33,7 +33,7 @@
 /* CXL 2.0 - 8.2.8.4 */
 #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
 
-static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
+static int cxl_pci_wait_for_doorbell(struct cxl_mem *cxlm)
 {
 	const unsigned long start = jiffies;
 	unsigned long end = start;
@@ -55,7 +55,7 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
+static void cxl_pci_mbox_timeout(struct cxl_mem *cxlm,
 				 struct cxl_mbox_cmd *mbox_cmd)
 {
 	struct device *dev = cxlm->dev;
@@ -65,7 +65,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 }
 
 /**
- * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
+ * __cxl_pci_mbox_send_cmd() - Execute a mailbox command
 * @cxlm: The CXL memory device to communicate with.
 * @mbox_cmd: Command to send to the memory device.
 *
@@ -86,7 +86,7 @@ static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
 * not need to coordinate with each other. The driver only uses the primary
 * mailbox.
 */
-static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
+static int __cxl_pci_mbox_send_cmd(struct cxl_mem *cxlm,
 				   struct cxl_mbox_cmd *mbox_cmd)
 {
 	void __iomem *payload = cxlm->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
@@ -140,9 +140,9 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 	       cxlm->regs.mbox + CXLDEV_MBOX_CTRL_OFFSET);
 
 	/* #5 */
-	rc = cxl_mem_wait_for_doorbell(cxlm);
+	rc = cxl_pci_wait_for_doorbell(cxlm);
 	if (rc == -ETIMEDOUT) {
-		cxl_mem_mbox_timeout(cxlm, mbox_cmd);
+		cxl_pci_mbox_timeout(cxlm, mbox_cmd);
 		return rc;
 	}
@@ -181,13 +181,13 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
 }
 
 /**
- * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
+ * cxl_pci_mbox_get() - Acquire exclusive access to the mailbox.
 * @cxlm: The memory device to gain access to.
 *
 * Context: Any context. Takes the mbox_mutex.
 * Return: 0 if exclusive access was acquired.
 */
-static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
+static int cxl_pci_mbox_get(struct cxl_mem *cxlm)
 {
 	struct device *dev = cxlm->dev;
 	u64 md_status;
@@ -212,7 +212,7 @@ static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
 	 * Mailbox Interface Ready bit. Therefore, waiting for the doorbell
 	 * to be ready is sufficient.
 	 */
-	rc = cxl_mem_wait_for_doorbell(cxlm);
+	rc = cxl_pci_wait_for_doorbell(cxlm);
 	if (rc) {
 		dev_warn(dev, "Mailbox interface not ready\n");
 		goto out;
@@ -252,12 +252,12 @@ static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
 }
 
 /**
- * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
+ * cxl_pci_mbox_put() - Release exclusive access to the mailbox.
 * @cxlm: The CXL memory device to communicate with.
 *
 * Context: Any context. Expects mbox_mutex to be held.
 */
-static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
+static void cxl_pci_mbox_put(struct cxl_mem *cxlm)
 {
 	mutex_unlock(&cxlm->mbox_mutex);
 }
@@ -266,17 +266,17 @@ static int cxl_pci_mbox_send(struct cxl_mem *cxlm, struct cxl_mbox_cmd *cmd)
 {
 	int rc;
 
-	rc = cxl_mem_mbox_get(cxlm);
+	rc = cxl_pci_mbox_get(cxlm);
 	if (rc)
 		return rc;
 
-	rc = __cxl_mem_mbox_send_cmd(cxlm, cmd);
-	cxl_mem_mbox_put(cxlm);
+	rc = __cxl_pci_mbox_send_cmd(cxlm, cmd);
+	cxl_pci_mbox_put(cxlm);
 
 	return rc;
 }
 
-static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
+static int cxl_pci_setup_mailbox(struct cxl_mem *cxlm)
 {
 	const int cap = readl(cxlm->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
 
@@ -304,7 +304,7 @@ static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
 	return 0;
 }
 
-static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
+static void __iomem *cxl_pci_map_regblock(struct cxl_mem *cxlm,
 					  u8 bar, u64 offset)
 {
 	void __iomem *addr;
@@ -330,12 +330,12 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 	return addr;
 }
 
-static void cxl_mem_unmap_regblock(struct cxl_mem *cxlm, void __iomem *base)
+static void cxl_pci_unmap_regblock(struct cxl_mem *cxlm, void __iomem *base)
 {
 	pci_iounmap(to_pci_dev(cxlm->dev), base);
 }
 
-static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
+static int cxl_pci_dvsec(struct pci_dev *pdev, int dvsec)
 {
 	int pos;
 
@@ -428,7 +428,7 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
 }
 
 /**
- * cxl_mem_setup_regs() - Setup necessary MMIO.
+ * cxl_pci_setup_regs() - Setup necessary MMIO.
 * @cxlm: The CXL memory device to communicate with.
 *
 * Return: 0 if all necessary registers mapped.
@@ -437,7 +437,7 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
 * regions. The purpose of this function is to enumerate and map those
 * registers.
 */
-static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
+static int cxl_pci_setup_regs(struct cxl_mem *cxlm)
 {
 	struct pci_dev *pdev = to_pci_dev(cxlm->dev);
 	struct device *dev = cxlm->dev;
@@ -447,7 +447,7 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	struct cxl_register_map *map, maps[CXL_REGLOC_RBI_TYPES];
 	int ret = 0;
 
-	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID);
+	regloc = cxl_pci_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID);
 	if (!regloc) {
 		dev_err(dev, "register location dvsec not found\n");
 		return -ENXIO;
@@ -482,7 +482,7 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 		if (reg_type > CXL_REGLOC_RBI_MEMDEV)
 			continue;
 
-		base = cxl_mem_map_regblock(cxlm, bar, offset);
+		base = cxl_pci_map_regblock(cxlm, bar, offset);
 		if (!base)
 			return -ENOMEM;
 
@@ -494,7 +494,7 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 		ret = cxl_probe_regs(cxlm, base + offset, map);
 
 		/* Always unmap the regblock regardless of probe success */
-		cxl_mem_unmap_regblock(cxlm, base);
+		cxl_pci_unmap_regblock(cxlm, base);
 
 		if (ret)
 			return ret;
@@ -513,7 +513,7 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	return ret;
 }
 
-static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct cxl_memdev *cxlmd;
 	struct cxl_mem *cxlm;
@@ -534,11 +534,11 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (IS_ERR(cxlm))
 		return PTR_ERR(cxlm);
 
-	rc = cxl_mem_setup_regs(cxlm);
+	rc = cxl_pci_setup_regs(cxlm);
 	if (rc)
 		return rc;
 
-	rc = cxl_mem_setup_mailbox(cxlm);
+	rc = cxl_pci_setup_mailbox(cxlm);
 	if (rc)
 		return rc;
 
@@ -569,17 +569,17 @@ static const struct pci_device_id cxl_mem_pci_tbl[] = {
 	{ PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF), ~0)},
 	{ /* terminate list */ },
 };
-MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
+MODULE_DEVICE_TABLE(pci, cxl_pci_tbl);
 
-static struct pci_driver cxl_mem_driver = {
+static struct pci_driver cxl_pci_driver = {
 	.name			= KBUILD_MODNAME,
 	.id_table		= cxl_mem_pci_tbl,
-	.probe			= cxl_mem_probe,
+	.probe			= cxl_pci_probe,
 	.driver	= {
 		.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
 	},
 };
 
 MODULE_LICENSE("GPL v2");
-module_pci_driver(cxl_mem_driver);
+module_pci_driver(cxl_pci_driver);
 MODULE_IMPORT_NS(CXL);

From patchwork Thu Sep 2 19:50:10 2021
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 12472419
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 06/13] cxl/mem: Introduce cxl_mem driver
Date: Thu, 2 Sep 2021 12:50:10 -0700
Message-Id: <20210902195017.2516472-7-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

CXL endpoints that participate in the CXL.mem protocol require extra control to ensure architectural constraints are met for device management. The most straightforward way to achieve control of these endpoints is with a new driver that can bind to such devices. This driver will also be responsible for enumerating the switches that connect the endpoint to the hostbridge.

cxl_core already understands the concept of a memdev, but the core [by design] does not comprehend all the topological constraints.
Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- .../driver-api/cxl/memory-devices.rst | 3 ++ drivers/cxl/Makefile | 3 +- drivers/cxl/core/bus.c | 2 + drivers/cxl/core/core.h | 1 + drivers/cxl/core/memdev.c | 2 +- drivers/cxl/cxl.h | 1 + drivers/cxl/mem.c | 49 +++++++++++++++++++ 7 files changed, 59 insertions(+), 2 deletions(-) create mode 100644 drivers/cxl/mem.c diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index a18175bae7a6..00d141071570 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -28,6 +28,9 @@ CXL Memory Device .. kernel-doc:: drivers/cxl/pci.c :internal: +.. kernel-doc:: drivers/cxl/mem.c + :doc: cxl mem + CXL Core -------- .. kernel-doc:: drivers/cxl/cxl.h diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index d1aaabc940f3..d912ac4e3f0c 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -1,9 +1,10 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CXL_BUS) += core/ -obj-$(CONFIG_CXL_MEM) += cxl_pci.o +obj-$(CONFIG_CXL_MEM) += cxl_mem.o cxl_pci.o obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o +cxl_mem-y := mem.o cxl_pci-y := pci.o cxl_acpi-y := acpi.o cxl_pmem-y := pmem.o diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 6202ce5a5ac2..256e55dc2a3b 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -641,6 +641,8 @@ static int cxl_device_id(struct device *dev) return CXL_DEVICE_NVDIMM_BRIDGE; if (dev->type == &cxl_nvdimm_type) return CXL_DEVICE_NVDIMM; + if (dev->type == &cxl_memdev_type) + return CXL_DEVICE_ENDPOINT; return 0; } diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h index e0c9aacc4e9c..dea246cb7c58 100644 --- a/drivers/cxl/core/core.h +++ b/drivers/cxl/core/core.h @@ -6,6 +6,7 @@ extern const struct device_type cxl_nvdimm_bridge_type; extern const struct device_type cxl_nvdimm_type; +extern const struct 
device_type cxl_memdev_type; extern struct attribute_group cxl_base_attribute_group; diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index ee61202c7aab..c9dd054bd813 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -127,7 +127,7 @@ static const struct attribute_group *cxl_memdev_attribute_groups[] = { NULL, }; -static const struct device_type cxl_memdev_type = { +const struct device_type cxl_memdev_type = { .name = "cxl_memdev", .release = cxl_memdev_release, .devnode = cxl_memdev_devnode, diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 708bfe92b596..b48bdbefd949 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -315,6 +315,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv); #define CXL_DEVICE_NVDIMM_BRIDGE 1 #define CXL_DEVICE_NVDIMM 2 +#define CXL_DEVICE_ENDPOINT 3 #define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*") #define CXL_MODALIAS_FMT "cxl:t%d" diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c new file mode 100644 index 000000000000..978a54b0a51a --- /dev/null +++ b/drivers/cxl/mem.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2021 Intel Corporation. All rights reserved. */ +#include <linux/device.h> +#include <linux/module.h> + +#include "cxlmem.h" + +/** + * DOC: cxl mem + * + * CXL memory endpoint devices and switches are CXL capable devices that are + * participating in the CXL.mem protocol. Their functionality builds on top of the + * CXL.io protocol that allows enumerating and configuring components via + * standard PCI mechanisms. + * + * The cxl_mem driver implements enumeration and control over these CXL + * components.
+ */ + +static int cxl_mem_probe(struct device *dev) +{ + return -EOPNOTSUPP; +} + +static void cxl_mem_remove(struct device *dev) +{ +} + +static struct cxl_driver cxl_mem_driver = { + .name = "cxl_mem", + .probe = cxl_mem_probe, + .remove = cxl_mem_remove, + .id = CXL_DEVICE_ENDPOINT, +}; + +static __init int cxl_mem_init(void) +{ + return cxl_driver_register(&cxl_mem_driver); +} + +static __exit void cxl_mem_exit(void) +{ + cxl_driver_unregister(&cxl_mem_driver); +} + +MODULE_LICENSE("GPL v2"); +module_init(cxl_mem_init); +module_exit(cxl_mem_exit); +MODULE_IMPORT_NS(CXL); From patchwork Thu Sep 2 19:50:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472423 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25E45C4332F for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0707B6102A for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347398AbhIBTvc (ORCPT ); Thu, 2 Sep 2021 15:51:32 -0400 Received: from mga12.intel.com ([192.55.52.136]:41967 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347397AbhIBTva (ORCPT ); Thu, 2 Sep 2021 15:51:30 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="198778205" X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="198778205" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by 
fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:26 -0700 X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="533451630" Received: from kappusam-mobl.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.143.117]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:26 -0700 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 07/13] cxl/memdev: Determine CXL.mem capability Date: Thu, 2 Sep 2021 12:50:11 -0700 Message-Id: <20210902195017.2516472-8-ben.widawsky@intel.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com> References: <20210902195017.2516472-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org If the "upstream" port of the endpoint is an enumerated downstream CXL port, and the device itself is CXL capable and enabled, the memdev driver can bind. This binding is useful for region configuration/creation because it provides a clean way for the region code to determine if the memdev is actually CXL capable. A memdev/hostbridge probe race is solved with a full CXL bus rescan at the end of ACPI probing (see comment in code for details). Switch enumeration will be done as a follow-on patch. As a result, if a switch is in the topology the memdev driver will not bind to any devices. CXL.mem capability is checked lazily at the time a region is bound. This is in line with the other configuration parameters. Below is an example (mem0 and mem1) of CXL memdev devices that now exist on the bus. 
/sys/bus/cxl/devices/ ├── decoder0.0 -> ../../../devices/platform/ACPI0017:00/root0/decoder0.0 ├── mem0 -> ../../../devices/pci0000:34/0000:34:01.0/0000:36:00.0/mem0 ├── mem1 -> ../../../devices/pci0000:34/0000:34:00.0/0000:35:00.0/mem1 ├── pmem0 -> ../../../devices/pci0000:34/0000:34:01.0/0000:36:00.0/mem0/pmem0 ├── pmem1 -> ../../../devices/pci0000:34/0000:34:00.0/0000:35:00.0/mem1/pmem1 ├── port1 -> ../../../devices/platform/ACPI0017:00/root0/port1 └── root0 -> ../../../devices/platform/ACPI0017:00/root0 Signed-off-by: Ben Widawsky --- drivers/cxl/acpi.c | 27 +++++++----------- drivers/cxl/core/bus.c | 60 +++++++++++++++++++++++++++++++++++++++ drivers/cxl/core/memdev.c | 6 ++++ drivers/cxl/cxl.h | 2 ++ drivers/cxl/cxlmem.h | 2 ++ drivers/cxl/mem.c | 55 ++++++++++++++++++++++++++++++++++- drivers/cxl/pci.c | 23 --------------- drivers/cxl/pci.h | 7 ++++- 8 files changed, 141 insertions(+), 41 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 7130beffc929..fd14094bdb3f 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -240,21 +240,6 @@ __mock int match_add_root_ports(struct pci_dev *pdev, void *data) return 0; } -static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device *dev) -{ - struct cxl_dport *dport; - - device_lock(&port->dev); - list_for_each_entry(dport, &port->dports, list) - if (dport->dport == dev) { - device_unlock(&port->dev); - return dport; - } - - device_unlock(&port->dev); - return NULL; -} - __mock struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev) { @@ -459,9 +444,19 @@ static int cxl_acpi_probe(struct platform_device *pdev) if (rc) goto out; - if (IS_ENABLED(CONFIG_CXL_PMEM)) + if (IS_ENABLED(CONFIG_CXL_PMEM)) { rc = device_for_each_child(&root_port->dev, root_port, add_root_nvdimm_bridge); + if (rc) + goto out; + } + + /* + * While ACPI is scanning hostbridge ports, switches and memory devices + * may have been probed. 
Those devices will need to know whether the + * hostbridge is CXL capable. + */ + rc = bus_rescan_devices(&cxl_bus_type); out: acpi_put_table(acpi_cedt); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 256e55dc2a3b..56f57302d27b 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "core.h" /** @@ -259,6 +260,12 @@ static const struct device_type cxl_port_type = { .groups = cxl_port_attribute_groups, }; +bool is_cxl_port(struct device *dev) +{ + return dev->type == &cxl_port_type; +} +EXPORT_SYMBOL_GPL(is_cxl_port); + struct cxl_port *to_cxl_port(struct device *dev) { if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type, @@ -266,6 +273,7 @@ struct cxl_port *to_cxl_port(struct device *dev) return NULL; return container_of(dev, struct cxl_port, dev); } +EXPORT_SYMBOL_GPL(to_cxl_port); static void unregister_port(void *_port) { @@ -424,6 +432,27 @@ static int add_dport(struct cxl_port *port, struct cxl_dport *new) return dup ? 
-EEXIST : 0; } +/** + * find_dport_by_dev - get the downstream CXL dport for a struct device + * @port: cxl [upstream] port that "owns" the downstream port being queried + * @dev: The device that is backing the downstream port + */ +struct cxl_dport *find_dport_by_dev(struct cxl_port *port, const struct device *dev) +{ + struct cxl_dport *dport; + + device_lock(&port->dev); + list_for_each_entry(dport, &port->dports, list) + if (dport->dport == dev) { + device_unlock(&port->dev); + return dport; + } + + device_unlock(&port->dev); + return NULL; +} +EXPORT_SYMBOL_GPL(find_dport_by_dev); + /** * cxl_add_dport - append downstream port data to a cxl_port * @port: the cxl_port that references this dport @@ -596,6 +625,37 @@ int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld) } EXPORT_SYMBOL_GPL(cxl_decoder_autoremove); +/** + * cxl_pci_dvsec - Get the offset for the given DVSEC id + * @pdev: PCI device to search for the DVSEC + * @dvsec: DVSEC id to look for + * + * Return: offset within the PCI configuration space for the given DVSEC id. 
0 if not found + */ +int cxl_pci_dvsec(struct pci_dev *pdev, int dvsec) +{ + int pos; + + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC); + if (!pos) + return 0; + + while (pos) { + u16 vendor, id; + + pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor); + pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id); + if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id) + return pos; + + pos = pci_find_next_ext_capability(pdev, pos, + PCI_EXT_CAP_ID_DVSEC); + } + + return 0; +} +EXPORT_SYMBOL_GPL(cxl_pci_dvsec); + /** * __cxl_driver_register - register a driver for the cxl bus * @cxl_drv: cxl driver structure to attach diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index c9dd054bd813..0068b5ff5f3e 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -337,3 +337,9 @@ void cxl_memdev_exit(void) { unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS); } + +bool is_cxl_mem_capable(struct cxl_memdev *cxlmd) +{ + return !!cxlmd->dev.driver; +} +EXPORT_SYMBOL_GPL(is_cxl_mem_capable); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index b48bdbefd949..a168520d741b 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -283,8 +283,10 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, resource_size_t component_reg_phys, struct cxl_port *parent_port); +bool is_cxl_port(struct device *dev); int cxl_add_dport(struct cxl_port *port, struct device *dport, int port_id, resource_size_t component_reg_phys); +struct cxl_dport *find_dport_by_dev(struct cxl_port *port, const struct device *dev); struct cxl_decoder *to_cxl_decoder(struct device *dev); bool is_root_decoder(struct device *dev); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 811b24451604..88264204c4b9 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -51,6 +51,8 @@ static inline struct cxl_memdev *to_cxl_memdev(struct device *dev) struct cxl_memdev *devm_cxl_add_memdev(struct device 
*host, struct cxl_mem *cxlm); +bool is_cxl_mem_capable(struct cxl_memdev *cxlmd); + /** * struct cxl_mbox_cmd - A command to be submitted to hardware. * @opcode: (input) The command set and command submitted to hardware. diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 978a54b0a51a..b6dc34d18a86 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -2,8 +2,10 @@ /* Copyright(c) 2021 Intel Corporation. All rights reserved. */ #include <linux/device.h> #include <linux/module.h> +#include <linux/pci.h> #include "cxlmem.h" +#include "pci.h" /** * DOC: cxl mem @@ -17,9 +19,60 @@ * components. */ +static int port_match(struct device *dev, const void *data) +{ + struct cxl_port *port; + + if (!is_cxl_port(dev)) + return 0; + + port = to_cxl_port(dev); + + if (find_dport_by_dev(port, (struct device *)data)) + return 1; + + return 0; +} + +static bool is_cxl_mem_enabled(struct pci_dev *pdev) +{ + int pcie_dvsec; + u16 dvsec_ctrl; + + pcie_dvsec = cxl_pci_dvsec(pdev, PCI_DVSEC_ID_PCIE_DVSEC_CXL_DVSEC_ID); + if (!pcie_dvsec) { + dev_info(&pdev->dev, "Unable to determine CXL protocol support\n"); + return false; + } + + pci_read_config_word(pdev, + pcie_dvsec + PCI_DVSEC_ID_CXL_PCIE_CTRL_OFFSET, + &dvsec_ctrl); + if (!(dvsec_ctrl & CXL_PCIE_MEM_ENABLE)) { + dev_info(&pdev->dev, "CXL.mem protocol not enabled on device\n"); + return false; + } + + return true; +} + static int cxl_mem_probe(struct device *dev) { - return -EOPNOTSUPP; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + struct device *pdev_parent = cxlm->dev->parent; + struct pci_dev *pdev = to_pci_dev(cxlm->dev); + struct device *port_dev; + + if (!is_cxl_mem_enabled(pdev)) + return -ENODEV; + + /* TODO: if parent is a switch, this will fail. 
*/ + port_dev = bus_find_device(&cxl_bus_type, NULL, pdev_parent, port_match); + if (!port_dev) + return -ENODEV; + + return 0; } static void cxl_mem_remove(struct device *dev) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 6931885c83ce..244b99948c40 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -335,29 +335,6 @@ static void cxl_pci_unmap_regblock(struct cxl_mem *cxlm, void __iomem *base) pci_iounmap(to_pci_dev(cxlm->dev), base); } -static int cxl_pci_dvsec(struct pci_dev *pdev, int dvsec) -{ - int pos; - - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC); - if (!pos) - return 0; - - while (pos) { - u16 vendor, id; - - pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor); - pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id); - if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id) - return pos; - - pos = pci_find_next_ext_capability(pdev, pos, - PCI_EXT_CAP_ID_DVSEC); - } - - return 0; -} - static int cxl_probe_regs(struct cxl_mem *cxlm, void __iomem *base, struct cxl_register_map *map) { diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index 8c1a58813816..d6b9978d05b0 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -11,7 +11,10 @@ */ #define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20) #define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 -#define PCI_DVSEC_ID_CXL 0x0 + +#define PCI_DVSEC_ID_PCIE_DVSEC_CXL_DVSEC_ID 0x0 +#define PCI_DVSEC_ID_CXL_PCIE_CTRL_OFFSET 0xC +#define CXL_PCIE_MEM_ENABLE BIT(2) #define PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID 0x8 #define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC @@ -29,4 +32,6 @@ #define CXL_REGLOC_ADDR_MASK GENMASK(31, 16) +int cxl_pci_dvsec(struct pci_dev *pdev, int dvsec); + #endif /* __CXL_PCI_H__ */ From patchwork Thu Sep 2 19:50:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0267EC433EF for ; Thu, 2 Sep 2021 19:50:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D890960FDC for ; Thu, 2 Sep 2021 19:50:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347396AbhIBTvc (ORCPT ); Thu, 2 Sep 2021 15:51:32 -0400 Received: from mga12.intel.com ([192.55.52.136]:41966 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347398AbhIBTva (ORCPT ); Thu, 2 Sep 2021 15:51:30 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="198778206" X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="198778206" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:27 -0700 X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="533451636" Received: from kappusam-mobl.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.143.117]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:26 -0700 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 08/13] cxl/mem: Add memdev as a port Date: Thu, 2 Sep 2021 12:50:12 -0700 Message-Id: <20210902195017.2516472-9-ben.widawsky@intel.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com> References: <20210902195017.2516472-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: 
X-Mailing-List: linux-cxl@vger.kernel.org CXL endpoints contain HDM decoders that are architecturally the same as those in a CXL switch or a CXL hostbridge. While some restrictions are in place for endpoints, they will require the same enumeration logic to determine the number and abilities of the HDM decoders. Utilizing the existing port APIs from cxl_core is the simplest way to gain access to the same set of information that switches and hostbridges have. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- drivers/cxl/core/bus.c | 5 ++++- drivers/cxl/mem.c | 10 +++++++++- 2 files changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 56f57302d27b..f26095b40f5c 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -377,7 +377,10 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, dev = &port->dev; if (parent_port) - rc = dev_set_name(dev, "port%d", port->id); + if (host->type == &cxl_memdev_type) + rc = dev_set_name(dev, "devport%d", port->id); + else + rc = dev_set_name(dev, "port%d", port->id); else rc = dev_set_name(dev, "root%d", port->id); if (rc) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index b6dc34d18a86..9d5a3a29cda1 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -63,6 +63,7 @@ static int cxl_mem_probe(struct device *dev) struct device *pdev_parent = cxlm->dev->parent; struct pci_dev *pdev = to_pci_dev(cxlm->dev); struct device *port_dev; + int rc; if (!is_cxl_mem_enabled(pdev)) return -ENODEV; @@ -72,7 +73,14 @@ static int cxl_mem_probe(struct device *dev) if (!port_dev) return -ENODEV; - return 0; + /* TODO: Obtain component registers */ + rc = PTR_ERR_OR_ZERO(devm_cxl_add_port(&cxlmd->dev, &cxlmd->dev, + CXL_RESOURCE_NONE, + to_cxl_port(port_dev))); + if (rc) + dev_err(dev, "Unable to add device's upstream port\n"); + + return rc; } static void cxl_mem_remove(struct device *dev) From patchwork Thu Sep 2 19:50:13 2021 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472421 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4BA1FC43217 for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2CD1861057 for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347397AbhIBTvd (ORCPT ); Thu, 2 Sep 2021 15:51:33 -0400 Received: from mga12.intel.com ([192.55.52.136]:41971 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347401AbhIBTva (ORCPT ); Thu, 2 Sep 2021 15:51:30 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="198778208" X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="198778208" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:27 -0700 X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="533451641" Received: from kappusam-mobl.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.143.117]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:27 -0700 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 09/13] cxl/pci: Retain map information in cxl_mem_probe Date: Thu, 2 Sep 2021 12:50:13 -0700 Message-Id: 
<20210902195017.2516472-10-ben.widawsky@intel.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com> References: <20210902195017.2516472-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In order for a memdev to participate in cxl_core's port APIs, the physical address of the memdev's component registers is needed. This is accomplished by allocating the array of maps in probe so they can be used after the memdev is created. Signed-off-by: Ben Widawsky --- drivers/cxl/pci.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 244b99948c40..e4b3549c4580 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -407,21 +407,22 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi, /** * cxl_pci_setup_regs() - Setup necessary MMIO. * @cxlm: The CXL memory device to communicate with. + * @maps: Array of maps populated by this function. * - * Return: 0 if all necessary registers mapped. + * Return: 0 if all necessary registers mapped. The results are stored in @maps. * * A memory device is required by spec to implement a certain set of MMIO * regions. The purpose of this function is to enumerate and map those * registers. 
*/ -static int cxl_pci_setup_regs(struct cxl_mem *cxlm) +static int cxl_pci_setup_regs(struct cxl_mem *cxlm, struct cxl_register_map maps[]) { struct pci_dev *pdev = to_pci_dev(cxlm->dev); struct device *dev = cxlm->dev; u32 regloc_size, regblocks; void __iomem *base; int regloc, i, n_maps; - struct cxl_register_map *map, maps[CXL_REGLOC_RBI_TYPES]; + struct cxl_register_map *map; int ret = 0; regloc = cxl_pci_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID); @@ -492,6 +493,7 @@ static int cxl_pci_setup_regs(struct cxl_mem *cxlm) static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { + struct cxl_register_map maps[CXL_REGLOC_RBI_TYPES]; struct cxl_memdev *cxlmd; struct cxl_mem *cxlm; int rc; @@ -511,7 +513,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (IS_ERR(cxlm)) return PTR_ERR(cxlm); - rc = cxl_pci_setup_regs(cxlm); + rc = cxl_pci_setup_regs(cxlm, maps); if (rc) return rc; From patchwork Thu Sep 2 19:50:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472425 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8AB86C433FE for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 71F1C61057 for ; Thu, 2 Sep 2021 19:50:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347401AbhIBTvd (ORCPT ); Thu, 2 Sep 2021 15:51:33 -0400 Received: from mga12.intel.com 
([192.55.52.136]:41966 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231476AbhIBTvb (ORCPT ); Thu, 2 Sep 2021 15:51:31 -0400 X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="198778209" X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="198778209" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:27 -0700 X-IronPort-AV: E=Sophos;i="5.85,263,1624345200"; d="scan'208";a="533451647" Received: from kappusam-mobl.amr.corp.intel.com (HELO bad-guy.kumite) ([10.252.143.117]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Sep 2021 12:50:27 -0700 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , Alison Schofield , Dan Williams , Ira Weiny , Jonathan Cameron , Vishal Verma Subject: [PATCH 10/13] cxl/core: Map component registers for ports Date: Thu, 2 Sep 2021 12:50:14 -0700 Message-Id: <20210902195017.2516472-11-ben.widawsky@intel.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com> References: <20210902195017.2516472-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Component registers are implemented for CXL.mem/cache operations. The cxl_pci driver handles enumerating CXL devices with the CXL.io protocol. The driver for managing CXL.mem/cache operations will need the component registers mapped, and the mapping cannot be shared across two devices. For now, it's fine to relinquish this mapping in cxl_pci. CXL IDE is one exception (perhaps others will exist) where it might be desirable to have the cxl_pci driver do negotiation. For that case, it will probably make sense to create an ephemeral mapping. Looking further ahead, there may need to be a cxl_core mechanism to arbitrate access to the component registers. 
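As a rough illustration of the exclusive-mapping constraint described above, the sketch below models a "one mapping owner at a time" rule over physical register ranges, similar in spirit to what devm_cxl_iomap_block() gets from the kernel's request_mem_region(). This is userspace illustration only; the claims table and function names are invented for the example, not part of the CXL code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch only: a register block may be claimed by one owner at a time.
 * The fixed-size claims table stands in for the kernel resource tree.
 */
struct claim {
	uint64_t start;
	uint64_t len;
	int used;
};

#define MAX_CLAIMS 8
static struct claim claims[MAX_CLAIMS];

/* Claim [start, start + len); returns 0 on success, -1 if already claimed. */
static int claim_region(uint64_t start, uint64_t len)
{
	size_t i;

	for (i = 0; i < MAX_CLAIMS; i++) {
		/* Half-open ranges overlap iff each starts before the other ends. */
		if (claims[i].used && start < claims[i].start + claims[i].len &&
		    claims[i].start < start + len)
			return -1;
	}
	for (i = 0; i < MAX_CLAIMS; i++) {
		if (!claims[i].used) {
			claims[i] = (struct claim){ .start = start, .len = len, .used = 1 };
			return 0;
		}
	}
	return -1; /* table full */
}
```

In these terms, cxl_pci relinquishing its component register mapping is what allows a later claim of the same block by the CXL.mem driver to succeed.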
Signed-off-by: Ben Widawsky --- drivers/cxl/core/bus.c | 38 ++++++++++++++++++++++++++++++++++++++ drivers/cxl/core/memdev.c | 11 +++++++---- drivers/cxl/core/regs.c | 6 +++--- drivers/cxl/cxl.h | 4 ++++ drivers/cxl/cxlmem.h | 4 +++- drivers/cxl/mem.c | 3 +-- drivers/cxl/pci.c | 19 +++++++++++++++++-- 7 files changed, 73 insertions(+), 12 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index f26095b40f5c..01b6fa8373e4 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -310,6 +310,37 @@ static int devm_cxl_link_uport(struct device *host, struct cxl_port *port) return devm_add_action_or_reset(host, cxl_unlink_uport, port); } +static int cxl_port_map_component_registers(struct cxl_port *port) +{ + struct cxl_register_map map; + struct cxl_component_reg_map *comp_map = &map.component_map; + void __iomem *crb; + + if (port->component_reg_phys == CXL_RESOURCE_NONE) + return 0; + + crb = devm_cxl_iomap_block(&port->dev, + port->component_reg_phys, + /* CXL_COMPONENT_REG_BLOCK_SIZE */ SZ_64K); + if (IS_ERR(crb)) + return PTR_ERR(crb); + + if (!crb) { + dev_err(&port->dev, "No component registers mapped\n"); + return -ENXIO; + } + + cxl_probe_component_regs(&port->dev, crb, comp_map); + if (!comp_map->hdm_decoder.valid) { + dev_err(&port->dev, "HDM decoder registers invalid\n"); + return -ENXIO; + } + + port->regs.hdm_decoder = crb + comp_map->hdm_decoder.offset; + + return 0; +} + static struct cxl_port *cxl_port_alloc(struct device *uport, resource_size_t component_reg_phys, struct cxl_port *parent_port) @@ -398,6 +429,13 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, if (rc) return ERR_PTR(rc); + /* Platform "switch" has no parent port or component registers */ + if (parent_port) { + rc = cxl_port_map_component_registers(port); + if (rc) + return ERR_PTR(rc); + } + return port; err: diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 0068b5ff5f3e..85fe42abd29b 100644 --- 
a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -185,7 +185,8 @@ static void cxl_memdev_unregister(void *_cxlmd) } static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm, - const struct file_operations *fops) + const struct file_operations *fops, + unsigned long component_reg_phys) { struct cxl_memdev *cxlmd; struct device *dev; @@ -200,6 +201,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm, if (rc < 0) goto err; cxlmd->id = rc; + cxlmd->component_reg_phys = component_reg_phys; dev = &cxlmd->dev; device_initialize(dev); @@ -275,15 +277,16 @@ static const struct file_operations cxl_memdev_fops = { .llseek = noop_llseek, }; -struct cxl_memdev * -devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm) +struct cxl_memdev *devm_cxl_add_memdev(struct device *host, + struct cxl_mem *cxlm, + unsigned long component_reg_phys) { struct cxl_memdev *cxlmd; struct device *dev; struct cdev *cdev; int rc; - cxlmd = cxl_memdev_alloc(cxlm, &cxl_memdev_fops); + cxlmd = cxl_memdev_alloc(cxlm, &cxl_memdev_fops, component_reg_phys); if (IS_ERR(cxlmd)) return cxlmd; diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c index 8535a7b94f28..4ba75fb6779f 100644 --- a/drivers/cxl/core/regs.c +++ b/drivers/cxl/core/regs.c @@ -145,9 +145,8 @@ void cxl_probe_device_regs(struct device *dev, void __iomem *base, } EXPORT_SYMBOL_GPL(cxl_probe_device_regs); -static void __iomem *devm_cxl_iomap_block(struct device *dev, - resource_size_t addr, - resource_size_t length) +void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr, + resource_size_t length) { void __iomem *ret_val; struct resource *res; @@ -166,6 +165,7 @@ static void __iomem *devm_cxl_iomap_block(struct device *dev, return ret_val; } +EXPORT_SYMBOL_GPL(devm_cxl_iomap_block); int cxl_map_component_regs(struct pci_dev *pdev, struct cxl_component_regs *regs, diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index a168520d741b..4585d03a0a67 100644 --- 
a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -149,6 +149,8 @@ struct cxl_register_map { }; }; +void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr, + resource_size_t length); void cxl_probe_component_regs(struct device *dev, void __iomem *base, struct cxl_component_reg_map *map); void cxl_probe_device_regs(struct device *dev, void __iomem *base, @@ -252,6 +254,7 @@ struct cxl_walk_context { * @dports: cxl_dport instances referenced by decoders * @decoder_ida: allocator for decoder ids * @component_reg_phys: component register capability base address (optional) + * @regs: Mapped version of @component_reg_phys */ struct cxl_port { struct device dev; @@ -260,6 +263,7 @@ struct cxl_port { struct list_head dports; struct ida decoder_ida; resource_size_t component_reg_phys; + struct cxl_component_regs regs; }; /** diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 88264204c4b9..f94624e43b2e 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -41,6 +41,7 @@ struct cxl_memdev { struct cdev cdev; struct cxl_mem *cxlm; int id; + unsigned long component_reg_phys; }; static inline struct cxl_memdev *to_cxl_memdev(struct device *dev) @@ -49,7 +50,8 @@ static inline struct cxl_memdev *to_cxl_memdev(struct device *dev) } struct cxl_memdev *devm_cxl_add_memdev(struct device *host, - struct cxl_mem *cxlm); + struct cxl_mem *cxlm, + unsigned long component_reg_phys); bool is_cxl_mem_capable(struct cxl_memdev *cxlmd); diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 9d5a3a29cda1..aba9a07d519f 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -73,9 +73,8 @@ static int cxl_mem_probe(struct device *dev) if (!port_dev) return -ENODEV; - /* TODO: Obtain component registers */ rc = PTR_ERR_OR_ZERO(devm_cxl_add_port(&cxlmd->dev, &cxlmd->dev, - CXL_RESOURCE_NONE, + cxlmd->component_reg_phys, to_cxl_port(port_dev))); if (rc) dev_err(dev, "Unable to add device's upstream port\n"); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c 
index e4b3549c4580..258190febb5a 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -382,8 +382,12 @@ static int cxl_map_regs(struct cxl_mem *cxlm, struct cxl_register_map *map) switch (map->reg_type) { case CXL_REGLOC_RBI_COMPONENT: +#ifndef CONFIG_CXL_MEM cxl_map_component_regs(pdev, &cxlm->regs.component, map); dev_dbg(dev, "Mapping component registers...\n"); +#else + dev_dbg(dev, "Component registers not mapped for %s\n", KBUILD_MODNAME); +#endif break; case CXL_REGLOC_RBI_MEMDEV: cxl_map_device_regs(pdev, &cxlm->regs.device_regs, map); @@ -493,10 +497,11 @@ static int cxl_pci_setup_regs(struct cxl_mem *cxlm, struct cxl_register_map maps static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { + unsigned long component_reg_phys = CXL_RESOURCE_NONE; struct cxl_register_map maps[CXL_REGLOC_RBI_TYPES]; struct cxl_memdev *cxlmd; struct cxl_mem *cxlm; - int rc; + int rc, i; /* * Double check the anonymous union trickery in struct cxl_regs @@ -533,7 +538,17 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; - cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm); + for (i = 0; i < ARRAY_SIZE(maps); i++) { + struct cxl_register_map *map = &maps[i]; + + if (map->reg_type != CXL_REGLOC_RBI_COMPONENT) + continue; + + component_reg_phys = pci_resource_start(pdev, map->barno) + + map->block_offset; + } + + cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, component_reg_phys); if (IS_ERR(cxlmd)) return PTR_ERR(cxlmd); From patchwork Thu Sep 2 19:50:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472427 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, 
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 11/13] cxl/core: Convert decoder range to resource
Date: Thu, 2 Sep 2021 12:50:15 -0700
Message-Id: <20210902195017.2516472-12-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

Regions will use the resource API to help manage allocated space.
As regions are children of the decoder, it makes sense that the parent host the main resource to be suballocated by the region. Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron --- drivers/cxl/acpi.c | 12 ++++-------- drivers/cxl/core/bus.c | 4 ++-- drivers/cxl/cxl.h | 4 ++-- 3 files changed, 8 insertions(+), 12 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index fd14094bdb3f..26691313d716 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -125,10 +125,9 @@ static void cxl_add_cfmws_decoders(struct device *dev, cxld->flags = cfmws_to_decoder_flags(cfmws->restrictions); cxld->target_type = CXL_DECODER_EXPANDER; - cxld->range = (struct range) { - .start = cfmws->base_hpa, - .end = cfmws->base_hpa + cfmws->window_size - 1, - }; + cxld->res = (struct resource)DEFINE_RES_MEM_NAMED(cfmws->base_hpa, + cfmws->window_size, + "cfmws"); cxld->interleave_ways = CFMWS_INTERLEAVE_WAYS(cfmws); cxld->interleave_granularity = CFMWS_INTERLEAVE_GRANULARITY(cfmws); @@ -318,10 +317,7 @@ static int add_host_bridge_uport(struct device *match, void *arg) cxld->interleave_ways = 1; cxld->interleave_granularity = PAGE_SIZE; cxld->target_type = CXL_DECODER_EXPANDER; - cxld->range = (struct range) { - .start = 0, - .end = -1, - }; + cxld->res = (struct resource)DEFINE_RES_MEM(0, 0); device_lock(&port->dev); dport = list_first_entry(&port->dports, typeof(*dport), list); diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index 01b6fa8373e4..d056dbd794a4 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -48,7 +48,7 @@ static ssize_t start_show(struct device *dev, struct device_attribute *attr, { struct cxl_decoder *cxld = to_cxl_decoder(dev); - return sysfs_emit(buf, "%#llx\n", cxld->range.start); + return sysfs_emit(buf, "%#llx\n", cxld->res.start); } static DEVICE_ATTR_RO(start); @@ -57,7 +57,7 @@ static ssize_t size_show(struct device *dev, struct device_attribute *attr, { struct cxl_decoder *cxld = to_cxl_decoder(dev); - 
return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range)); + return sysfs_emit(buf, "%#llx\n", resource_size(&cxld->res)); } static DEVICE_ATTR_RO(size); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 4585d03a0a67..e610fa9dd6c8 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -192,7 +192,7 @@ enum cxl_decoder_type { * struct cxl_decoder - CXL address range decode configuration * @dev: this decoder's device * @id: kernel device name id - * @range: address range considered by this decoder + * @res: address space resources considered by this decoder * @interleave_ways: number of cxl_dports in this decode * @interleave_granularity: data stride per dport * @target_type: accelerator vs expander (type2 vs type3) selector @@ -203,7 +203,7 @@ enum cxl_decoder_type { struct cxl_decoder { struct device dev; int id; - struct range range; + struct resource res; int interleave_ways; int interleave_granularity; enum cxl_decoder_type target_type; From patchwork Thu Sep 2 19:50:16 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472431
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 12/13] cxl/core/bus: Enumerate all HDM decoders
Date: Thu, 2 Sep 2021 12:50:16 -0700
Message-Id: <20210902195017.2516472-13-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

As of the CXL 2.0 specification, every port will have between 1 and 10 HDM decoders available in hardware. These exist in endpoints, switches, and top-level host bridges. HDM decoders are required for configuring CXL regions, so enumerating them is an important first step.
As an example, the below has 4 decoders, a top level CFMWS decoder (0.0), a single decoder in a single host bridge (1.0), and two devices each with 1 decoder (2.0 and 3.0) ├── decoder0.0 -> ../../../devices/platform/ACPI0017:00/root0/decoder0.0 ├── decoder1.0 -> ../../../devices/platform/ACPI0017:00/root0/port1/decoder1.0 ├── decoder2.0 -> ../../../devices/platform/ACPI0017:00/root0/port1/devport2/decoder2.0 ├── decoder3.0 -> ../../../devices/platform/ACPI0017:00/root0/port1/devport3/decoder3.0 Additionally, attributes are added for a port: /sys/bus/cxl/devices/port1 ├── active_decoders ├── decoder_count ├── decoder_enabled ├── max_target_count ... Signed-off-by: Ben Widawsky --- drivers/cxl/core/bus.c | 161 ++++++++++++++++++++++++++++++++++++++++- drivers/cxl/cxl.h | 54 ++++++++++++-- 2 files changed, 209 insertions(+), 6 deletions(-) diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c index d056dbd794a4..b75e42965e89 100644 --- a/drivers/cxl/core/bus.c +++ b/drivers/cxl/core/bus.c @@ -43,6 +43,15 @@ struct attribute_group cxl_base_attribute_group = { .attrs = cxl_base_attributes, }; +static ssize_t enabled_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct cxl_decoder *cxld = to_cxl_decoder(dev); + + return sysfs_emit(buf, "%d\n", !!cxld->decoder_enabled); +} +static DEVICE_ATTR_RO(enabled); + static ssize_t start_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -130,6 +139,7 @@ static ssize_t target_list_show(struct device *dev, static DEVICE_ATTR_RO(target_list); static struct attribute *cxl_decoder_base_attrs[] = { + &dev_attr_enabled.attr, &dev_attr_start.attr, &dev_attr_size.attr, &dev_attr_locked.attr, @@ -249,8 +259,48 @@ static void cxl_port_release(struct device *dev) kfree(port); } +static ssize_t active_decoders_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_port *port = to_cxl_port(dev); + + return sysfs_emit(buf, "%*pbl\n", port->decoder_cap.count, + 
port->used_decoders); +} +static DEVICE_ATTR_RO(active_decoders); + +static ssize_t decoder_count_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_port *port = to_cxl_port(dev); + + return sysfs_emit(buf, "%d\n", port->decoder_cap.count); +} +static DEVICE_ATTR_RO(decoder_count); + +static ssize_t max_target_count_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_port *port = to_cxl_port(dev); + + return sysfs_emit(buf, "%d\n", port->decoder_cap.target_count); +} +static DEVICE_ATTR_RO(max_target_count); + +static struct attribute *cxl_port_caps_attributes[] = { + &dev_attr_active_decoders.attr, + &dev_attr_decoder_count.attr, + &dev_attr_max_target_count.attr, + NULL, +}; + +struct attribute_group cxl_port_attribute_group = { + .attrs = cxl_port_caps_attributes, +}; + static const struct attribute_group *cxl_port_attribute_groups[] = { &cxl_base_attribute_group, + &cxl_port_attribute_group, NULL, }; @@ -341,6 +391,107 @@ static int cxl_port_map_component_registers(struct cxl_port *port) return 0; } +static int port_populate_caps(struct cxl_port *port) +{ + void __iomem *hdm_decoder = port->regs.hdm_decoder; + u32 hdm_cap; + + hdm_cap = readl(hdm_decoder + CXL_HDM_DECODER_CAP_OFFSET); + + port->used_decoders = devm_bitmap_zalloc(&port->dev, + cxl_hdm_decoder_count(hdm_cap), + GFP_KERNEL); + if (!port->used_decoders) + return -ENOMEM; + + port->decoder_cap.count = cxl_hdm_decoder_count(hdm_cap); + port->decoder_cap.target_count = + FIELD_GET(CXL_HDM_DECODER_TARGET_COUNT_MASK, hdm_cap); + port->decoder_cap.interleave11_8 = + FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_11_8, hdm_cap); + port->decoder_cap.interleave14_12 = + FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_14_12, hdm_cap); + + return 0; +} + +static int cxl_port_enumerate_hdm_decoders(struct device *host, + struct cxl_port *port) +{ + void __iomem *hdm_decoder = port->regs.hdm_decoder; + u32 hdm_ctrl; + int i, rc = 0; + + rc = 
port_populate_caps(port); + if (rc) + return rc; + + if (port->decoder_cap.count == 0) { + dev_warn(host, "Found no HDM decoders\n"); + return -ENODEV; + } + + for (i = 0; i < port->decoder_cap.count; i++) { + enum cxl_decoder_type type = CXL_DECODER_EXPANDER; + struct resource res = DEFINE_RES_MEM(0, 0); + struct cxl_decoder *cxld; + int iw = 0, ig = 0; + u32 ctrl; + + cxld = cxl_decoder_alloc(port, is_endpoint_decoder(host) ? 0 : + port->decoder_cap.target_count); + if (IS_ERR(cxld)) { + dev_warn(host, "Failed to allocate the decoder\n"); + return PTR_ERR(cxld); + } + + ctrl = readl(hdm_decoder + CXL_HDM_DECODER0_CTRL_OFFSET(i)); + cxld->decoder_enabled = + !!FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl); + /* If the decoder is already active, parse info */ + if (cxld->decoder_enabled) { + set_bit(i, port->used_decoders); + iw = cxl_hdm_decoder_iw(ctrl); + ig = cxl_hdm_decoder_ig(ctrl); + if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl) == 0) + type = CXL_DECODER_ACCELERATOR; + res.start = readl(hdm_decoder + + CXL_HDM_DECODER0_BASE_LOW_OFFSET(i)); + res.start |= + (u64)readl(hdm_decoder + + CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i)) + << 32; + } + + cxld->target_type = type; + cxld->res = res; + cxld->interleave_ways = iw; + cxld->interleave_granularity = ig; + + rc = cxl_decoder_add(host, cxld, NULL); + if (rc) { + dev_warn(host, "Failed to add decoder (%d)\n", rc); + kfree(cxld); + goto out; + } + } + + /* + * Enable CXL.mem decoding via MMIO for endpoint devices + * + * TODO: If a memory device was configured to participate in a region by + * system firmware via DVSEC, this will break that region. 
+ */ + if (is_endpoint_decoder(host)) { + hdm_ctrl = readl(hdm_decoder + CXL_HDM_DECODER_CTRL_OFFSET); + writel(hdm_ctrl | CXL_HDM_DECODER_ENABLE, + hdm_decoder + CXL_HDM_DECODER_CTRL_OFFSET); + } + +out: + return rc; +} + static struct cxl_port *cxl_port_alloc(struct device *uport, resource_size_t component_reg_phys, struct cxl_port *parent_port) @@ -432,8 +583,16 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, /* Platform "switch" has no parent port or component registers */ if (parent_port) { rc = cxl_port_map_component_registers(port); - if (rc) + if (rc) { + dev_err(host, "Failed to map component registers\n"); return ERR_PTR(rc); + } + + rc = cxl_port_enumerate_hdm_decoders(host, port); + if (rc) { + dev_err(host, "Failed to enumerate HDM decoders\n"); + return ERR_PTR(rc); + } } return port; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index e610fa9dd6c8..6759fe097e12 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -36,11 +36,19 @@ #define CXL_HDM_DECODER_CAP_OFFSET 0x0 #define CXL_HDM_DECODER_COUNT_MASK GENMASK(3, 0) #define CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4) -#define CXL_HDM_DECODER0_BASE_LOW_OFFSET 0x10 -#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET 0x14 -#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET 0x18 -#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET 0x1c -#define CXL_HDM_DECODER0_CTRL_OFFSET 0x20 +#define CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8) +#define CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9) +#define CXL_HDM_DECODER_CTRL_OFFSET 0x0 +#define CXL_HDM_DECODER_ENABLE BIT(1) +#define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x10 + (i) * 0x20) +#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i) (0x14 + (i) * 0x20) +#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i) (0x18 + (i) * 0x20) +#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i) (0x1c + (i) * 0x20) +#define CXL_HDM_DECODER0_CTRL_OFFSET(i) (0x20 + (i) * 0x20) +#define CXL_HDM_DECODER0_CTRL_IG_MASK GENMASK(3, 0) +#define CXL_HDM_DECODER0_CTRL_IW_MASK GENMASK(7, 4) +#define 
CXL_HDM_DECODER0_CTRL_COMMITTED BIT(10) +#define CXL_HDM_DECODER0_CTRL_TYPE BIT(12) static inline int cxl_hdm_decoder_count(u32 cap_hdr) { @@ -49,6 +57,20 @@ static inline int cxl_hdm_decoder_count(u32 cap_hdr) return val ? val * 2 : 1; } +static inline int cxl_hdm_decoder_ig(u32 ctrl) +{ + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IG_MASK, ctrl); + + return 8 + val; +} + +static inline int cxl_hdm_decoder_iw(u32 ctrl) +{ + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IW_MASK, ctrl); + + return 1 << val; +} + /* CXL 2.0 8.2.8.1 Device Capabilities Array Register */ #define CXLDEV_CAP_ARRAY_OFFSET 0x0 #define CXLDEV_CAP_ARRAY_CAP_ID 0 @@ -188,6 +210,12 @@ enum cxl_decoder_type { */ #define CXL_DECODER_MAX_INTERLEAVE 16 +/* + * Current specification goes up to 10 double that seems a reasonable + * software max for the foreseeable future + */ +#define CXL_DECODER_MAX_COUNT 20 + /** * struct cxl_decoder - CXL address range decode configuration * @dev: this decoder's device @@ -197,6 +225,7 @@ enum cxl_decoder_type { * @interleave_granularity: data stride per dport * @target_type: accelerator vs expander (type2 vs type3) selector * @flags: memory type capabilities and locking + * @decoder_enabled: Is this decoder currently decoding * @nr_targets: number of elements in @target * @target: active ordered target list in current decoder configuration */ @@ -208,6 +237,7 @@ struct cxl_decoder { int interleave_granularity; enum cxl_decoder_type target_type; unsigned long flags; + bool decoder_enabled; int nr_targets; struct cxl_dport *target[]; }; @@ -255,6 +285,12 @@ struct cxl_walk_context { * @decoder_ida: allocator for decoder ids * @component_reg_phys: component register capability base address (optional) * @regs: Mapped version of @component_reg_phys + * @used_decoders: Bitmap of currently active decoders for the port + * @decoder_cap: Capabilities of all decoders contained by the port + * @decoder_cap.count: Count of HDM decoders for the port + * @decoder_cap.target_count: 
Max number of interleaved downstream ports + * @decoder_cap.interleave11_8: Are address bits 11-8 available for interleave + * @decoder_cap.interleave14_12: Are address bits 14-12 available for interleave */ struct cxl_port { struct device dev; @@ -264,6 +300,14 @@ struct cxl_port { struct ida decoder_ida; resource_size_t component_reg_phys; struct cxl_component_regs regs; + + unsigned long *used_decoders; + struct { + int count; + int target_count; + bool interleave11_8; + bool interleave14_12; + } decoder_cap; }; /** From patchwork Thu Sep 2 19:50:17 2021 X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12472429
From: Ben Widawsky
To: linux-cxl@vger.kernel.org
Cc: Ben Widawsky, Alison Schofield, Dan Williams, Ira Weiny, Jonathan Cameron, Vishal Verma
Subject: [PATCH 13/13] cxl/mem: Enumerate switch decoders
Date: Thu, 2 Sep 2021 12:50:17 -0700
Message-Id: <20210902195017.2516472-14-ben.widawsky@intel.com>
In-Reply-To: <20210902195017.2516472-1-ben.widawsky@intel.com>
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

Switches work in much the same way as host bridges. The primary difference is that they are enumerated and probed via regular PCIe mechanisms. A switch has one upstream port and n downstream ports. Ultimately, a memory device attached to a switch can determine whether it sits in a CXL-capable subset of the topology by checking whether the switch is CXL capable.

The algorithm introduced here enumerates switches in a CXL topology. It walks up the topology until it finds a root port (which is enumerated by the cxl_acpi driver), then walks back down, adding all downstream ports along the way.

Note that, practically speaking, there can be at most three levels of switches under the current 2.0 spec, because the spec defines a maximum interleave of 8: with a single host bridge and only one CXL-capable root port, three levels of x2 switches yield the x8 interleave. As far as the spec is concerned, however, there can be an infinite number of switches, since a x1 switch is allowed, and future versions of the spec may allow a larger total interleave.
Signed-off-by: Ben Widawsky --- drivers/cxl/mem.c | 130 +++++++++++++++++++++++++++++++++++++++++++++- drivers/cxl/pci.c | 8 --- drivers/cxl/pci.h | 8 +++ 3 files changed, 137 insertions(+), 9 deletions(-) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index aba9a07d519f..dc8ca43d5bfc 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -56,6 +56,133 @@ static bool is_cxl_mem_enabled(struct pci_dev *pdev) return true; } +/* TODO: dedeuplicate this from drivers/cxl/pci.c? */ +static unsigned long get_component_regs(struct pci_dev *pdev) +{ + unsigned long component_reg_phys = CXL_RESOURCE_NONE; + u32 regloc_size, regblocks; + int regloc, i; + + regloc = cxl_pci_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID); + if (!regloc) { + dev_err(&pdev->dev, "register location dvsec not found\n"); + return component_reg_phys; + } + + /* Get the size of the Register Locator DVSEC */ + pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, ®loc_size); + regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size); + + regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET; + regblocks = (regloc_size - PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET) / 8; + + for (i = 0; i < regblocks; i++, regloc += 8) { + u32 reg_lo, reg_hi; + u8 reg_type; + u64 offset; + u8 bar; + + pci_read_config_dword(pdev, regloc, ®_lo); + pci_read_config_dword(pdev, regloc + 4, ®_hi); + + cxl_decode_register_block(reg_lo, reg_hi, &bar, &offset, + ®_type); + + if (reg_type != CXL_REGLOC_RBI_COMPONENT) + continue; + + component_reg_phys = pci_resource_start(pdev, bar) + offset; + } + + return component_reg_phys; +} + +static void enumerate_uport(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + + /* + * Parent's parent should be another uport, since we don't have root + * ports here + */ + if (dev_WARN_ONCE(dev, !dev->parent->parent, "No grandparent port\n")) + return; + + if (!is_cxl_port(dev->parent->parent)) { + dev_info(dev, "Parent of uport isn't a CXL port (%s)\n", + 
dev_name(dev->parent->parent)); + return; + } + + devm_cxl_add_port(dev, dev, get_component_regs(pdev), + to_cxl_port(dev->parent)); +} + +static void enumerate_dport(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + u32 port_num, lnkcap; + + if (dev_WARN_ONCE(dev, !dev->parent, "No parent port\n")) + return; + + if (!is_cxl_port(dev->parent)) { + dev_info(dev, "Uport isn't a CXL port %s\n", + dev_name(dev->parent)); + return; + } + + /* TODO: deduplicate from drivers/cxl/acpi.c? */ + if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP, + &lnkcap) != PCIBIOS_SUCCESSFUL) + return; + port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap); + + cxl_add_dport(to_cxl_port(dev->parent), dev, port_num, + get_component_regs(pdev)); +} + +/* + * Walk up the topology until we get to the root port (ie. parent is a + * cxl port). From there walk back down adding the additional ports. If the + * parent isn't a PCIe switch (upstream or downstream port), the downstream + * endpoint(s) cannot be CXL enabled. + * + * XXX: It's possible that cxl_acpi hasn't yet enumerated the root ports, and + * so that will rescan the CXL bus, thus coming back here. 
+ */ +static void enumerate_switches(struct device *dev) +{ + struct pci_dev *pdev; + int type; + + if (unlikely(!dev)) + return; + + if (unlikely(!dev_is_pci(dev))) + return; + + pdev = to_pci_dev(dev); + + if (unlikely(!pci_is_pcie(pdev))) + return; + + if (!is_cxl_mem_enabled(pdev)) + return; + + type = pci_pcie_type(pdev); + + if (type != PCI_EXP_TYPE_UPSTREAM && type != PCI_EXP_TYPE_DOWNSTREAM) + return; + + enumerate_switches(dev->parent); + + if (type == PCI_EXP_TYPE_UPSTREAM) + enumerate_uport(dev); + if (type == PCI_EXP_TYPE_DOWNSTREAM) + enumerate_dport(dev); +} + static int cxl_mem_probe(struct device *dev) { struct cxl_memdev *cxlmd = to_cxl_memdev(dev); @@ -68,7 +195,8 @@ static int cxl_mem_probe(struct device *dev) if (!is_cxl_mem_enabled(pdev)) return -ENODEV; - /* TODO: if parent is a switch, this will fail. */ + enumerate_switches(dev->parent); + port_dev = bus_find_device(&cxl_bus_type, NULL, pdev_parent, port_match); if (!port_dev) return -ENODEV; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 258190febb5a..e338f2f759d0 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -400,14 +400,6 @@ static int cxl_map_regs(struct cxl_mem *cxlm, struct cxl_register_map *map) return 0; } -static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi, - u8 *bar, u64 *offset, u8 *reg_type) -{ - *offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK); - *bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo); - *reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo); -} - /** * cxl_pci_setup_regs() - Setup necessary MMIO. * @cxlm: The CXL memory device to communicate with. 
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index d6b9978d05b0..8250d487e39d 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -34,4 +34,12 @@ int cxl_pci_dvsec(struct pci_dev *pdev, int dvsec); +static inline void cxl_decode_register_block(u32 reg_lo, u32 reg_hi, u8 *bar, + u64 *offset, u8 *reg_type) +{ + *offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK); + *bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo); + *reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo); +} + #endif /* __CXL_PCI_H__ */