From patchwork Mon Jul 10 16:47:03 2023
X-Patchwork-Submitter: Naresh Solanki
X-Patchwork-Id: 13307397
From: Naresh Solanki
To: devicetree@vger.kernel.org, Guenter Roeck, Jean Delvare, Iwona Winiarska
Cc: linux-kernel@vger.kernel.org, linux-hwmon@vger.kernel.org, Patrick Rudolph
Subject: [PATCH 1/2] hwmon: (dimmtemp) Support more than 32 DIMMs
Date: Mon, 10 Jul 2023 18:47:03 +0200
Message-ID: <20230710164705.3985996-1-Naresh.Solanki@9elements.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-hwmon@vger.kernel.org

From: Patrick Rudolph

Track populated DIMMs and empty channel ranks in bitmaps instead of u32
masks. Bitmap operations remove the implicit 32-bit limit, so the driver
can handle more than 32 DIMMs efficiently.

Signed-off-by: Patrick Rudolph
---
 drivers/hwmon/peci/dimmtemp.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

base-commit: 4dbbaf8fbdbd13adc80731b2452257857e4c2d8b

diff --git a/drivers/hwmon/peci/dimmtemp.c b/drivers/hwmon/peci/dimmtemp.c
index ed968401f93c..ce89da3937a0 100644
--- a/drivers/hwmon/peci/dimmtemp.c
+++ b/drivers/hwmon/peci/dimmtemp.c
@@ -219,19 +219,21 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
 {
 	int chan_rank_max = priv->gen_info->chan_rank_max;
 	int dimm_idx_max = priv->gen_info->dimm_idx_max;
-	u32 chan_rank_empty = 0;
-	u32 dimm_mask = 0;
-	int chan_rank, dimm_idx, ret;
+	DECLARE_BITMAP(dimm_mask, DIMM_NUMS_MAX);
+	DECLARE_BITMAP(chan_rank_empty, CHAN_RANK_MAX);
+
+	int chan_rank, dimm_idx, ret, i;
 	u32 pcs;
 
-	BUILD_BUG_ON(BITS_PER_TYPE(chan_rank_empty) < CHAN_RANK_MAX);
-	BUILD_BUG_ON(BITS_PER_TYPE(dimm_mask) < DIMM_NUMS_MAX);
 	if (chan_rank_max * dimm_idx_max > DIMM_NUMS_MAX) {
 		WARN_ONCE(1, "Unsupported number of DIMMs - chan_rank_max: %d, dimm_idx_max: %d",
 			  chan_rank_max, dimm_idx_max);
 		return -EINVAL;
 	}
 
+	bitmap_zero(dimm_mask, DIMM_NUMS_MAX);
+	bitmap_zero(chan_rank_empty, CHAN_RANK_MAX);
+
 	for (chan_rank = 0; chan_rank < chan_rank_max; chan_rank++) {
 		ret = peci_pcs_read(priv->peci_dev, PECI_PCS_DDR_DIMM_TEMP, chan_rank, &pcs);
 		if (ret) {
@@ -242,7 +244,7 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
 			 * detection to be performed at a later point in time.
 			 */
 			if (ret == -EINVAL) {
-				chan_rank_empty |= BIT(chan_rank);
+				bitmap_set(chan_rank_empty, chan_rank, 1);
 				continue;
 			}
 
@@ -251,7 +253,7 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
 
 		for (dimm_idx = 0; dimm_idx < dimm_idx_max; dimm_idx++)
 			if (__dimm_temp(pcs, dimm_idx))
-				dimm_mask |= BIT(chan_rank * dimm_idx_max + dimm_idx);
+				bitmap_set(dimm_mask, chan_rank * dimm_idx_max + dimm_idx, 1);
 	}
 
 	/*
@@ -260,7 +262,7 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
 	 * host platform boot. Retrying a couple of times lets us make sure
 	 * that the state is persistent.
 	 */
-	if (chan_rank_empty == GENMASK(chan_rank_max - 1, 0)) {
+	if (bitmap_full(chan_rank_empty, chan_rank_max)) {
 		if (priv->no_dimm_retry_count < NO_DIMM_RETRY_COUNT_MAX) {
 			priv->no_dimm_retry_count++;
 
@@ -274,14 +276,16 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
 	 * It's possible that memory training is not done yet. In this case we
 	 * defer the detection to be performed at a later point in time.
 	 */
-	if (!dimm_mask) {
+	if (bitmap_empty(dimm_mask, DIMM_NUMS_MAX)) {
 		priv->no_dimm_retry_count = 0;
 		return -EAGAIN;
 	}
 
-	dev_dbg(priv->dev, "Scanned populated DIMMs: %#x\n", dimm_mask);
+	for_each_set_bit(i, dimm_mask, DIMM_NUMS_MAX) {
+		dev_dbg(priv->dev, "Found DIMM%#x\n", i);
+	}
 
-	bitmap_from_arr32(priv->dimm_mask, &dimm_mask, DIMM_NUMS_MAX);
+	bitmap_copy(priv->dimm_mask, dimm_mask, DIMM_NUMS_MAX);
 
 	return 0;
 }
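
For reference, the bitmap helpers used in this patch come from <linux/bitmap.h>:
DECLARE_BITMAP() reserves an array of unsigned long sized for the requested
number of bits, so the 32-bit ceiling of a plain u32 mask goes away, and
bitmap_copy() can fill priv->dimm_mask directly instead of going through
bitmap_from_arr32(). Below is a minimal, self-contained sketch of the pattern;
the size constant and function name are illustrative and not taken from the
driver.

#include <linux/bitmap.h>
#include <linux/printk.h>

#define EXAMPLE_DIMM_NUMS_MAX	256	/* illustrative size, not the driver's value */

static void example_dimm_scan(void)
{
	DECLARE_BITMAP(dimm_mask, EXAMPLE_DIMM_NUMS_MAX);	/* one bit per DIMM slot */
	unsigned int i;

	bitmap_zero(dimm_mask, EXAMPLE_DIMM_NUMS_MAX);

	/* Mark slots 3 and 40 as populated; BIT(40) would not fit in a u32. */
	bitmap_set(dimm_mask, 3, 1);
	bitmap_set(dimm_mask, 40, 1);

	if (bitmap_empty(dimm_mask, EXAMPLE_DIMM_NUMS_MAX))
		return;

	/* Walk only the populated slots, as the patch does for its debug output. */
	for_each_set_bit(i, dimm_mask, EXAMPLE_DIMM_NUMS_MAX)
		pr_debug("Found DIMM%#x\n", i);
}
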
From patchwork Mon Jul 10 16:47:04 2023
X-Patchwork-Submitter: Naresh Solanki
X-Patchwork-Id: 13307398
From: Naresh Solanki
To: devicetree@vger.kernel.org, Guenter Roeck, Jean Delvare, Iwona Winiarska
Cc: linux-kernel@vger.kernel.org, linux-hwmon@vger.kernel.org, Patrick Rudolph
Subject: [PATCH 2/2] hwmon: (dimmtemp) Add Sapphire Rapids support
Date: Mon, 10 Jul 2023 18:47:04 +0200
Message-ID: <20230710164705.3985996-2-Naresh.Solanki@9elements.com>
In-Reply-To: <20230710164705.3985996-1-Naresh.Solanki@9elements.com>
References: <20230710164705.3985996-1-Naresh.Solanki@9elements.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-hwmon@vger.kernel.org

From: Patrick Rudolph

Add support for the Sapphire Rapids platform. Sapphire Rapids can
accommodate up to 8 CPUs, each with 16 DIMMs, so raise the maximum
supported DIMM count and add the corresponding device ID and threshold
code.

Default thresholds are used for Sapphire Rapids because reading them
would require access to the UBOX device on uncore bus 0, which is only
reachable via MSR access; the non-PCI-compliant MMIO BARs are not
available for this purpose.

Tested on a 4S system with 64 DIMMs installed.

Signed-off-by: Patrick Rudolph
---
 drivers/hwmon/peci/dimmtemp.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/hwmon/peci/dimmtemp.c b/drivers/hwmon/peci/dimmtemp.c
index ce89da3937a0..ea4ac5a023cf 100644
--- a/drivers/hwmon/peci/dimmtemp.c
+++ b/drivers/hwmon/peci/dimmtemp.c
@@ -30,8 +30,10 @@
 #define DIMM_IDX_MAX_ON_ICX	2
 #define CHAN_RANK_MAX_ON_ICXD	4
 #define DIMM_IDX_MAX_ON_ICXD	2
+#define CHAN_RANK_MAX_ON_SPR	128
+#define DIMM_IDX_MAX_ON_SPR	2
 
-#define CHAN_RANK_MAX		CHAN_RANK_MAX_ON_HSX
+#define CHAN_RANK_MAX		CHAN_RANK_MAX_ON_SPR
 #define DIMM_IDX_MAX		DIMM_IDX_MAX_ON_HSX
 #define DIMM_NUMS_MAX		(CHAN_RANK_MAX * DIMM_IDX_MAX)
 
@@ -534,6 +536,15 @@ read_thresholds_icx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u
 	return 0;
 }
 
+static int
+read_thresholds_spr(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u32 *data)
+{
+	/* Use defaults */
+	*data = (95 << 16) | (90 << 8);
+
+	return 0;
+}
+
 static const struct dimm_info dimm_hsx = {
 	.chan_rank_max = CHAN_RANK_MAX_ON_HSX,
 	.dimm_idx_max = DIMM_IDX_MAX_ON_HSX,
@@ -576,6 +587,13 @@ static const struct dimm_info dimm_icxd = {
 	.read_thresholds = &read_thresholds_icx,
 };
 
+static const struct dimm_info dimm_spr = {
+	.chan_rank_max = CHAN_RANK_MAX_ON_SPR,
+	.dimm_idx_max = DIMM_IDX_MAX_ON_SPR,
+	.min_peci_revision = 0x40,
+	.read_thresholds = &read_thresholds_spr,
+};
+
 static const struct auxiliary_device_id peci_dimmtemp_ids[] = {
 	{
 		.name = "peci_cpu.dimmtemp.hsx",
@@ -601,6 +619,10 @@ static const struct auxiliary_device_id peci_dimmtemp_ids[] = {
 		.name = "peci_cpu.dimmtemp.icxd",
 		.driver_data = (kernel_ulong_t)&dimm_icxd,
 	},
+	{
+		.name = "peci_cpu.dimmtemp.spr",
+		.driver_data = (kernel_ulong_t)&dimm_spr,
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(auxiliary, peci_dimmtemp_ids);
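
For context on the hard-coded default in read_thresholds_spr(): the packed
word appears to follow the layout the driver expects from its
read_thresholds_*() callbacks, with the maximum (warning) temperature in bits
15:8 and the critical temperature in bits 23:16, both in degrees Celsius. The
small stand-alone sketch below decodes the SPR default; the EXAMPLE_* macros
are illustrative names, not the driver's.

#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_TEMP_MAX(x)	(((x) >> 8) & 0xff)	/* warning threshold, bits 15:8 */
#define EXAMPLE_TEMP_CRIT(x)	(((x) >> 16) & 0xff)	/* critical threshold, bits 23:16 */

int main(void)
{
	uint32_t data = (95 << 16) | (90 << 8);	/* the default written for SPR */
	unsigned int temp_max = EXAMPLE_TEMP_MAX(data);
	unsigned int temp_crit = EXAMPLE_TEMP_CRIT(data);

	printf("temp_max:  %u C\n", temp_max);	/* prints 90 */
	printf("temp_crit: %u C\n", temp_crit);	/* prints 95 */
	return 0;
}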