From patchwork Thu May 6 22:36:51 2021
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12243587
X-Mailing-List: linux-cxl@vger.kernel.org
From: ira.weiny@intel.com
To: Ben Widawsky, Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Jonathan Cameron,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] cxl/mem: Fully decode device capability header
Date: Thu, 6 May 2021 15:36:51 -0700
Message-Id: <20210506223654.1310516-2-ira.weiny@intel.com>
In-Reply-To: <20210506223654.1310516-1-ira.weiny@intel.com>
References: <20210506223654.1310516-1-ira.weiny@intel.com>

From: Ira Weiny

Previously only the capability ID and offset were decoded. Create a
version MASK and decode the additional version and length fields of the
header.

Signed-off-by: Ira Weiny
---
 drivers/cxl/core.c | 15 ++++++++++++---
 drivers/cxl/cxl.h  |  1 +
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/cxl/core.c b/drivers/cxl/core.c
index b3c3532b53f7..21553386e218 100644
--- a/drivers/cxl/core.c
+++ b/drivers/cxl/core.c
@@ -501,12 +501,21 @@ void cxl_setup_device_regs(struct device *dev, void __iomem *base,
 
 	for (cap = 1; cap <= cap_count; cap++) {
 		void __iomem *register_block;
-		u32 offset;
+		u32 hdr, offset, __maybe_unused length;
 		u16 cap_id;
+		u8 version;
+
+		hdr = readl(base + cap * 0x10);
+
+		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK, hdr);
+		version = FIELD_GET(CXLDEV_CAP_HDR_VERSION_MASK, hdr);
+		if (version != 1)
+			dev_err(dev, "Vendor cap ID: %x incorrect version (0x%x)\n",
+				cap_id, version);
 
-		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK,
-				   readl(base + cap * 0x10));
 		offset = readl(base + cap * 0x10 + 0x4);
+		length = readl(base + cap * 0x10 + 0x8);
+
 		register_block = base + offset;
 
 		switch (cap_id) {
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 0211f44c95a2..9b315c069557 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -15,6 +15,7 @@
 #define CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK_ULL(47, 32)
 /* CXL 2.0 8.2.8.2 CXL Device Capability Header Register */
 #define CXLDEV_CAP_HDR_CAP_ID_MASK GENMASK(15, 0)
+#define CXLDEV_CAP_HDR_VERSION_MASK GENMASK(23, 16)
 /* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */
 #define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1
 #define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2

From patchwork Thu May 6 22:36:52 2021
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12243589
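The capability-header decode that patch 1 above adds can be sketched as standalone userspace C. This is illustration only, not kernel code: `GENMASK_U32()`, `cap_hdr_id()`, and `cap_hdr_version()` are hypothetical re-implementations of the kernel's `GENMASK()`/`FIELD_GET()` helpers; the mask layout (capability ID in bits 15:0, version in bits 23:16) is taken from the patch itself.

```c
#include <stdint.h>

/* Userspace stand-in for the kernel's GENMASK(): bits h..l set. */
#define GENMASK_U32(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

#define CAP_HDR_CAP_ID_MASK  GENMASK_U32(15, 0)  /* bits 15:0  */
#define CAP_HDR_VERSION_MASK GENMASK_U32(23, 16) /* bits 23:16 */

/* Extract the capability ID from a capability header dword. */
static inline uint16_t cap_hdr_id(uint32_t hdr)
{
	return hdr & CAP_HDR_CAP_ID_MASK;
}

/* Extract the version field, as the patch's FIELD_GET() call does. */
static inline uint8_t cap_hdr_version(uint32_t hdr)
{
	return (hdr & CAP_HDR_VERSION_MASK) >> 16;
}
```

A header dword of `0x00010002` would decode to capability ID `0x2`, version `1`, which is the version the patch treats as valid.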
From: ira.weiny@intel.com
To: Ben Widawsky, Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Jonathan Cameron,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] cxl/mem: Reserve all device regions at once
Date: Thu, 6 May 2021 15:36:52 -0700
Message-Id: <20210506223654.1310516-3-ira.weiny@intel.com>
In-Reply-To: <20210506223654.1310516-1-ira.weiny@intel.com>
References: <20210506223654.1310516-1-ira.weiny@intel.com>

From: Ira Weiny

In order to remap individual register sets, each BAR region must be
reserved prior to mapping. Because the details of the individual
register sets are contained within the BARs themselves, each BAR must be
mapped twice: once to extract this information and a second time for
each register set.

Rather than attempting to reserve each BAR individually and tracking
whether it has already been reserved, open code pcim_iomap_regions() by
first reserving all memory regions on the device and then mapping the
BARs individually as needed.

Signed-off-by: Ira Weiny
---
 drivers/cxl/pci.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 191603b4e10b..40016709b310 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -926,9 +926,9 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm, u32 reg_lo, u32
 {
 	struct pci_dev *pdev = cxlm->pdev;
 	struct device *dev = &pdev->dev;
+	void __iomem *rc;
 	u64 offset;
 	u8 bar;
-	int rc;
 
 	offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK);
 	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
@@ -940,13 +940,14 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm, u32 reg_lo, u32
 		return (void __iomem *)ERR_PTR(-ENXIO);
 	}
 
-	rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
-	if (rc) {
+	rc = pcim_iomap(pdev, bar, 0);
+	if (!rc) {
 		dev_err(dev, "failed to map registers\n");
-		return (void __iomem *)ERR_PTR(rc);
+		return (void __iomem *)ERR_PTR(-ENOMEM);
 	}
 
-	dev_dbg(dev, "Mapped CXL Memory Device resource\n");
+	dev_dbg(dev, "Mapped CXL Memory Device resource bar %u @ 0x%llx\n",
+		bar, offset);
 
 	return pcim_iomap_table(pdev)[bar] + offset;
 }
@@ -999,6 +1000,9 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 		return -ENXIO;
 	}
 
+	if (pci_request_mem_regions(pdev, pci_name(pdev)))
+		return -ENODEV;
+
 	/* Get the size of the Register Locator DVSEC */
 	pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
 	regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);

From patchwork Thu May 6 22:36:53 2021
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12243591
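cxl_mem_map_regblock() in patch 2 above returns errors through the pointer itself via the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() idiom (e.g. `ERR_PTR(-ENOMEM)` on map failure). A minimal userspace model of that idiom, assuming the kernel convention that the top 4095 pointer values encode negative errnos; `err_ptr()`, `is_err()`, and `ptr_err()` are hypothetical stand-ins for the real macros:

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* Stand-in for ERR_PTR(): smuggle a negative errno into a pointer. */
static inline void *err_ptr(long error)
{
	return (void *)(intptr_t)error;
}

/* Stand-in for IS_ERR(): error pointers live in the top MAX_ERRNO values. */
static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Stand-in for PTR_ERR(): recover the errno from an error pointer. */
static inline long ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;
}
```

With this encoding a single return value carries either a valid mapping or an errno such as -ENOMEM (-12), which is why the patch can drop the separate `int rc` status variable.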
From: ira.weiny@intel.com
To: Ben Widawsky, Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Jonathan Cameron,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] cxl/mem: Introduce cxl_decode_register_block()
Date: Thu, 6 May 2021 15:36:53 -0700
Message-Id: <20210506223654.1310516-4-ira.weiny@intel.com>
In-Reply-To: <20210506223654.1310516-1-ira.weiny@intel.com>
References: <20210506223654.1310516-1-ira.weiny@intel.com>

From: Ira Weiny

Each register block located in the DVSEC needs to be decoded from two
dwords, 'register offset high' and 'register offset low'. Create a
function, cxl_decode_register_block(), to perform this decode and return
the BAR, offset, and register type of the register block. Then use the
decoded values in cxl_mem_map_regblock() instead of passing the raw
registers.

Signed-off-by: Ira Weiny
---
 drivers/cxl/pci.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 40016709b310..cee14de0f251 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -922,16 +922,12 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev)
 	return cxlm;
 }
 
-static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm, u32 reg_lo, u32 reg_hi)
+static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
+					  u8 bar, u64 offset)
 {
 	struct pci_dev *pdev = cxlm->pdev;
 	struct device *dev = &pdev->dev;
 	void __iomem *rc;
-	u64 offset;
-	u8 bar;
-
-	offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK);
-	bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
 
 	/* Basic sanity check that BAR is big enough */
 	if (pci_resource_len(pdev, bar) < offset) {
@@ -975,6 +971,14 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 	return 0;
 }
 
+static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
+				      u8 *bar, u64 *offset, u8 *reg_type)
+{
+	*offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK);
+	*bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
+	*reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
+}
+
 /**
  * cxl_mem_setup_regs() - Setup necessary MMIO.
  * @cxlm: The CXL memory device to communicate with.
@@ -1013,15 +1017,21 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 	for (i = 0; i < regblocks; i++, regloc += 8) {
 		u32 reg_lo, reg_hi;
 		u8 reg_type;
+		u64 offset;
+		u8 bar;
 
 		/* "register low and high" contain other bits */
 		pci_read_config_dword(pdev, regloc, &reg_lo);
 		pci_read_config_dword(pdev, regloc + 4, &reg_hi);
 
-		reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
+		cxl_decode_register_block(reg_lo, reg_hi, &bar, &offset,
+					  &reg_type);
+
+		dev_dbg(dev, "Found register block in bar %u @ 0x%llx of type %u\n",
+			bar, offset, reg_type);
 
 		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
-			base = cxl_mem_map_regblock(cxlm, reg_lo, reg_hi);
+			base = cxl_mem_map_regblock(cxlm, bar, offset);
 			if (IS_ERR(base))
 				return PTR_ERR(base);
 			break;

From patchwork Thu May 6 22:36:54 2021
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12243593
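The decode that patch 3 above factors out can be exercised standalone. In this sketch the `CXL_REGLOC_*` field layout is an assumption (BIR in bits 2:0, register block identifier in bits 15:8, offset in bits 31:16, hard-coded as literal masks), and `decode_register_block()` is a hypothetical userspace stand-in for the driver function, not the driver code itself:

```c
#include <stdint.h>

/* Decode a register block entry from its two DVSEC dwords.
 * Assumed layout of reg_lo: BIR bits 2:0, type bits 15:8,
 * offset bits 31:16; reg_hi holds the upper 32 offset bits. */
static void decode_register_block(uint32_t reg_lo, uint32_t reg_hi,
				  uint8_t *bar, uint64_t *offset,
				  uint8_t *reg_type)
{
	*offset   = ((uint64_t)reg_hi << 32) | (reg_lo & 0xffff0000u);
	*bar      = reg_lo & 0x7;          /* BAR Indicator Register */
	*reg_type = (reg_lo >> 8) & 0xff;  /* Register Block Identifier */
}
```

For example, `reg_lo = 0x00100302` with `reg_hi = 0x1` decodes to BAR 2, block type 3, and a 64-bit offset assembled from both dwords.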
From: ira.weiny@intel.com
To: Ben Widawsky, Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Jonathan Cameron,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] cxl/mem: Map registers based on capabilities
Date: Thu, 6 May 2021 15:36:54 -0700
Message-Id: <20210506223654.1310516-5-ira.weiny@intel.com>
In-Reply-To: <20210506223654.1310516-1-ira.weiny@intel.com>
References: <20210506223654.1310516-1-ira.weiny@intel.com>

From: Ira Weiny

The information required to map registers based on capabilities is
contained within the BARs themselves. This means a BAR must be mapped to
read that information and then unmapped so that the individual parts of
the BAR can be mapped based on capabilities.

Change cxl_setup_device_regs() to return a register map and rename it
cxl_probe_device_regs(). Allocate these register maps and place them on
a list while processing each register block. After probing all blocks,
map the individual register blocks as specified in the maps and free the
maps created.

Signed-off-by: Ira Weiny
---
 drivers/cxl/core.c |  73 ++++++++++++++++++++++-------
 drivers/cxl/cxl.h  |  33 ++++++++++++--
 drivers/cxl/pci.c  | 111 ++++++++++++++++++++++++++++++++++++---------
 3 files changed, 175 insertions(+), 42 deletions(-)

diff --git a/drivers/cxl/core.c b/drivers/cxl/core.c
index 21553386e218..b8c7ca9d3203 100644
--- a/drivers/cxl/core.c
+++ b/drivers/cxl/core.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 #include "cxl.h"
 
 /**
@@ -479,18 +480,13 @@ struct cxl_port *devm_cxl_add_port(struct device *host,
 }
 EXPORT_SYMBOL_GPL(devm_cxl_add_port);
 
-/*
- * cxl_setup_device_regs() - Detect CXL Device register blocks
- * @dev: Host device of the @base mapping
- * @base: mapping of CXL 2.0 8.2.8 CXL Device Register Interface
- */
-void cxl_setup_device_regs(struct device *dev, void __iomem *base,
-			   struct cxl_device_regs *regs)
+void cxl_probe_device_regs(struct device *dev, void __iomem *base,
+			   struct cxl_device_reg_map *map)
 {
 	int cap, cap_count;
 	u64 cap_array;
 
-	*regs = (struct cxl_device_regs) { 0 };
+	*map = (struct cxl_device_reg_map){ 0 };
 
 	cap_array = readq(base + CXLDEV_CAP_ARRAY_OFFSET);
 	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
@@ -500,8 +496,7 @@ void cxl_setup_device_regs(struct device *dev, void __iomem *base,
 	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
 
 	for (cap = 1; cap <= cap_count; cap++) {
-		void __iomem *register_block;
-		u32 hdr, offset, __maybe_unused length;
+		u32 hdr, offset, length;
 		u16 cap_id;
 		u8 version;
 
@@ -516,23 +511,28 @@ void cxl_setup_device_regs(struct device *dev, void __iomem *base,
 		offset = readl(base + cap * 0x10 + 0x4);
 		length = readl(base + cap * 0x10 + 0x8);
 
-		register_block = base + offset;
-
 		switch (cap_id) {
 		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
 			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
-			regs->status = register_block;
+
+			map->status.valid = true;
+			map->status.offset = offset;
+			map->status.size = length;
 			break;
 		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
 			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
-			regs->mbox = register_block;
+			map->mbox.valid = true;
+			map->mbox.offset = offset;
+			map->mbox.size = length;
 			break;
 		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
 			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
 			break;
 		case CXLDEV_CAP_CAP_ID_MEMDEV:
 			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
-			regs->memdev = register_block;
+			map->memdev.valid = true;
+			map->memdev.offset = offset;
+			map->memdev.size = length;
 			break;
 		default:
 			if (cap_id > 0x8000)
@@ -543,7 +543,48 @@ void cxl_setup_device_regs(struct device *dev, void __iomem *base,
 		}
 	}
 }
-EXPORT_SYMBOL_GPL(cxl_setup_device_regs);
+EXPORT_SYMBOL_GPL(cxl_probe_device_regs);
+
+int cxl_map_device_regs(struct pci_dev *pdev,
+			struct cxl_device_regs *regs,
+			struct cxl_register_map *map)
+{
+	struct device *dev = &pdev->dev;
+	resource_size_t phys_addr;
+
+	phys_addr = pci_resource_start(pdev, map->barno);
+	phys_addr += map->block_offset;
+
+	if (map->device_map.status.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.status.offset;
+		length = map->device_map.status.size;
+		regs->status = devm_ioremap(dev, addr, length);
+	}
+
+	if (map->device_map.mbox.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.mbox.offset;
+		length = map->device_map.mbox.size;
+		regs->mbox = devm_ioremap(dev, addr, length);
+	}
+
+	if (map->device_map.memdev.valid) {
+		resource_size_t addr;
+		resource_size_t length;
+
+		addr = phys_addr + map->device_map.memdev.offset;
+		length = map->device_map.memdev.size;
+		regs->memdev = devm_ioremap(dev, addr, length);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_map_device_regs);
 
 struct bus_type cxl_bus_type = {
 	.name = "cxl",
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 9b315c069557..afc18ee89795 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -55,9 +55,7 @@ struct cxl_device_regs {
 /*
  * Note, the anonymous union organization allows for per
  * register-block-type helper routines, without requiring block-type
- * agnostic code to include the prefix. I.e.
- * cxl_setup_device_regs(&cxlm->regs.dev) vs readl(cxlm->regs.mbox).
- * The specificity reads naturally from left-to-right.
+ * agnostic code to include the prefix.
  */
 struct cxl_regs {
 	union {
@@ -68,8 +66,33 @@ struct cxl_regs {
 	};
 };
 
-void cxl_setup_device_regs(struct device *dev, void __iomem *base,
-			   struct cxl_device_regs *regs);
+struct cxl_reg_map {
+	bool valid;
+	unsigned long offset;
+	unsigned long size;
+};
+
+struct cxl_device_reg_map {
+	struct cxl_reg_map status;
+	struct cxl_reg_map mbox;
+	struct cxl_reg_map memdev;
+};
+
+struct cxl_register_map {
+	struct list_head list;
+	u64 block_offset;
+	u8 reg_type;
+	u8 barno;
+	union {
+		struct cxl_device_reg_map device_map;
+	};
+};
+
+void cxl_probe_device_regs(struct device *dev, void __iomem *base,
+			   struct cxl_device_reg_map *map);
+int cxl_map_device_regs(struct pci_dev *pdev,
+			struct cxl_device_regs *regs,
+			struct cxl_register_map *map);
 
 /*
  * Address space properties derived from:
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index cee14de0f251..97361c0e8a32 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -936,7 +937,7 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 		return (void __iomem *)ERR_PTR(-ENXIO);
 	}
 
-	rc = pcim_iomap(pdev, bar, 0);
+	rc = pci_iomap(pdev, bar, 0);
 	if (!rc) {
 		dev_err(dev, "failed to map registers\n");
 		return (void __iomem *)ERR_PTR(-ENOMEM);
@@ -945,7 +946,12 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
 	dev_dbg(dev, "Mapped CXL Memory Device resource bar %u @ 0x%llx\n",
 		bar, offset);
 
-	return pcim_iomap_table(pdev)[bar] + offset;
+	return rc;
+}
+
+static void cxl_mem_unmap_regblock(struct cxl_mem *cxlm, void __iomem *base)
+{
+	pci_iounmap(cxlm->pdev, base);
 }
 
 static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
@@ -971,6 +977,52 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
 	return 0;
 }
 
+static int cxl_probe_regs(struct cxl_mem *cxlm, void __iomem *base,
+			  struct cxl_register_map *map)
+{
+	struct pci_dev *pdev = cxlm->pdev;
+	struct device *dev = &pdev->dev;
+	struct cxl_device_reg_map *dev_map;
+
+	switch (map->reg_type) {
+	case CXL_REGLOC_RBI_MEMDEV:
+		dev_map = &map->device_map;
+		cxl_probe_device_regs(dev, base, dev_map);
+		if (!dev_map->status.valid || !dev_map->mbox.valid ||
+		    !dev_map->memdev.valid) {
+			dev_err(dev, "registers not found: %s%s%s\n",
+				!dev_map->status.valid ? "status " : "",
+				!dev_map->mbox.valid ? "mbox " : "",
+				!dev_map->memdev.valid ? "memdev" : "");
+			return -ENXIO;
+		}
+
+		dev_dbg(dev, "Probing device registers...\n");
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int cxl_map_regs(struct cxl_mem *cxlm, struct cxl_register_map *map)
+{
+	struct pci_dev *pdev = cxlm->pdev;
+	struct device *dev = &pdev->dev;
+
+	switch (map->reg_type) {
+	case CXL_REGLOC_RBI_MEMDEV:
+		cxl_map_device_regs(pdev, &cxlm->regs.device_regs, map);
+		dev_dbg(dev, "Probing device registers...\n");
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
 static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
 				      u8 *bar, u64 *offset, u8 *reg_type)
 {
@@ -991,12 +1043,14 @@ static void cxl_decode_register_block(u32 reg_lo, u32 reg_hi,
  */
 static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 {
-	struct cxl_regs *regs = &cxlm->regs;
 	struct pci_dev *pdev = cxlm->pdev;
 	struct device *dev = &pdev->dev;
 	u32 regloc_size, regblocks;
 	void __iomem *base;
 	int regloc, i;
+	struct cxl_register_map *map, *n;
+	LIST_HEAD(register_maps);
+	int ret = 0;
 
 	regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
 	if (!regloc) {
@@ -1020,7 +1074,14 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 		u64 offset;
 		u8 bar;
 
-		/* "register low and high" contain other bits */
+		map = kzalloc(sizeof(*map), GFP_KERNEL);
+		if (!map) {
+			ret = -ENOMEM;
+			goto free_maps;
+		}
+
+		list_add(&map->list, &register_maps);
+
 		pci_read_config_dword(pdev, regloc, &reg_lo);
 		pci_read_config_dword(pdev, regloc + 4, &reg_hi);
 
@@ -1030,30 +1091,38 @@ static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
 		dev_dbg(dev, "Found register block in bar %u @ 0x%llx of type %u\n",
 			bar, offset, reg_type);
 
-		if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
-			base = cxl_mem_map_regblock(cxlm, bar, offset);
-			if (IS_ERR(base))
-				return PTR_ERR(base);
-			break;
+		base = cxl_mem_map_regblock(cxlm, bar, offset);
+		if (IS_ERR(base)) {
+			ret = PTR_ERR(base);
+			goto free_maps;
 		}
-	}
 
-	if (i == regblocks) {
-		dev_err(dev, "Missing register locator for device registers\n");
-		return -ENXIO;
+		map->barno = bar;
+		map->block_offset = offset;
+		map->reg_type = reg_type;
+
+		ret = cxl_probe_regs(cxlm, base + offset, map);
+
+		/* Always unmap the regblock regardless of probe success */
+		cxl_mem_unmap_regblock(cxlm, base);
+
+		if (ret)
+			goto free_maps;
 	}
 
-	cxl_setup_device_regs(dev, base, &regs->device_regs);
+	list_for_each_entry(map, &register_maps, list) {
+		ret = cxl_map_regs(cxlm, map);
+		if (ret)
+			goto free_maps;
+	}
 
-	if (!regs->status || !regs->mbox || !regs->memdev) {
-		dev_err(dev, "registers not found: %s%s%s\n",
-			!regs->status ? "status " : "",
-			!regs->mbox ? "mbox " : "",
-			!regs->memdev ? "memdev" : "");
-		return -ENXIO;
+free_maps:
+	list_for_each_entry_safe(map, n, &register_maps, list) {
+		list_del(&map->list);
+		kfree(map);
 	}
 
-	return 0;
+	return ret;
 }
 
 static struct cxl_memdev *to_cxl_memdev(struct device *dev)
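The two-pass probe-then-map flow that patch 4 introduces can be modeled in a short userspace sketch: pass one records a `{valid, offset, size}` entry per discovered capability, and pass two turns each valid entry into an absolute address. `reg_map`, `device_reg_map`, and `map_entry()` mirror the patch's structures in spirit only; `map_entry()` is a hypothetical stand-in for what `cxl_map_device_regs()` hands to `devm_ioremap()`, not driver code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Pass 1 output: what cxl_probe_device_regs() records per capability. */
struct reg_map {
	bool valid;
	unsigned long offset;	/* offset within the register block */
	unsigned long size;
};

struct device_reg_map {
	struct reg_map status;
	struct reg_map mbox;
	struct reg_map memdev;
};

/* Pass 2: compute the physical address a real driver would ioremap,
 * from the BAR base, the register block offset, and the probed entry.
 * Returns 0 for entries that were never marked valid. */
static uint64_t map_entry(uint64_t bar_base, uint64_t block_offset,
			  const struct reg_map *m)
{
	if (!m->valid)
		return 0;
	return bar_base + block_offset + m->offset;
}
```

Separating the passes is what lets the driver unmap the whole BAR after probing and remap only the small per-capability windows afterward.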