From patchwork Mon Apr 14 09:37:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1793CC369B6 for ; Mon, 14 Apr 2025 09:49:24 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9834C10E516; Mon, 14 Apr 2025 09:49:23 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="Y9XPHfOe"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id A6F9A10E516; Mon, 14 Apr 2025 09:49:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624162; x=1776160162; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ciby7S4K5tiBfPSBg+I9XMeCsqEGDlgG5DdkPfZhqGY=; b=Y9XPHfOeq4W6bADWhNtF+mkJ0t6tuooizsl7EE6if4BaVu3AKHl1mBfE qKAAZ+WfrVQUj13FhmZRo2Fi+374MBgprqDGHjYRQ921X6T+k6BxJoN3y +WzblM1kYUVDC4Wcy0BeunEJvlKGFYfSVrIn64vgOsprMfnQcLn1qeVJh W042NIesU5P8/hAb0pZ8GHIJ/zgrgYDuo70auaBQjBZN6qzhh58/igKWG AgSMdahzc+qG8DPHs6p+io5OQvkYORIO45kFY14MCUNxd5D83srsUgSBX b1zFaa5m3kYTrmmW7onaMed/aCDEM8lueN2rc8SgVXb9UcnDTVfckvMvV g==; X-CSE-ConnectionGUID: kQGGEAvBQpS3Nfppjb82wg== X-CSE-MsgGUID: V+m7YDxjQmiv9k7gsWlSBA== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222610" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222610" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:21 -0700 X-CSE-ConnectionGUID: lyAGeSDtR02lWVx5lbOeFA== X-CSE-MsgGUID: jAQ5oXuNQRC39sIEz22iAg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954835" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:15 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 01/12] mtd: core: always create master device Date: Mon, 14 Apr 2025 12:37:52 +0300 Message-ID: <20250414093803.2133463-2-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Create master device without partition when CONFIG_MTD_PARTITIONED_MASTER flag is unset. This streamlines device tree and allows to anchor runtime power management on master device in all cases. Signed-off-by: Alexander Usyskin --- drivers/mtd/mtdchar.c | 2 +- drivers/mtd/mtdcore.c | 152 ++++++++++++++++++++++++--------- drivers/mtd/mtdcore.h | 2 +- drivers/mtd/mtdpart.c | 16 ++-- include/linux/mtd/partitions.h | 2 +- 5 files changed, 123 insertions(+), 51 deletions(-) diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c index 8dc4f5c493fc..391d81ad960c 100644 --- a/drivers/mtd/mtdchar.c +++ b/drivers/mtd/mtdchar.c @@ -559,7 +559,7 @@ static int mtdchar_blkpg_ioctl(struct mtd_info *mtd, /* Sanitize user input */ p.devname[BLKPG_DEVNAMELTH - 1] = '\0'; - return mtd_add_partition(mtd, p.devname, p.start, p.length); + return mtd_add_partition(mtd, p.devname, p.start, p.length, NULL); case BLKPG_DEL_PARTITION: diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c index 5ba9a741f5ac..429d8c16baf0 100644 --- a/drivers/mtd/mtdcore.c +++ b/drivers/mtd/mtdcore.c @@ -68,7 +68,13 @@ static struct class mtd_class = { .pm = MTD_CLS_PM_OPS, }; +static struct class mtd_master_class = { + .name = "mtd_master", + .pm = MTD_CLS_PM_OPS, +}; + static DEFINE_IDR(mtd_idr); +static DEFINE_IDR(mtd_master_idr); /* These are exported solely for the purpose of mtd_blkdevs.c. You should not use them for _anything_ else */ @@ -83,8 +89,9 @@ EXPORT_SYMBOL_GPL(__mtd_next_device); static LIST_HEAD(mtd_notifiers); - +#define MTD_MASTER_DEVS 255 #define MTD_DEVT(index) MKDEV(MTD_CHAR_MAJOR, (index)*2) +static dev_t mtd_master_devt; /* REVISIT once MTD uses the driver model better, whoever allocates * the mtd_info will probably want to use the release() hook... @@ -104,6 +111,17 @@ static void mtd_release(struct device *dev) device_destroy(&mtd_class, index + 1); } +static void mtd_master_release(struct device *dev) +{ + struct mtd_info *mtd = dev_get_drvdata(dev); + + idr_remove(&mtd_master_idr, mtd->index); + of_node_put(mtd_get_of_node(mtd)); + + if (mtd_is_partition(mtd)) + release_mtd_partition(mtd); +} + static void mtd_device_release(struct kref *kref) { struct mtd_info *mtd = container_of(kref, struct mtd_info, refcnt); @@ -367,6 +385,11 @@ static const struct device_type mtd_devtype = { .release = mtd_release, }; +static const struct device_type mtd_master_devtype = { + .name = "mtd_master", + .release = mtd_master_release, +}; + static bool mtd_expert_analysis_mode; #ifdef CONFIG_DEBUG_FS @@ -634,13 +657,13 @@ static void mtd_check_of_node(struct mtd_info *mtd) /** * add_mtd_device - register an MTD device * @mtd: pointer to new MTD device info structure + * @partitioned: create partitioned device * * Add a device to the list of MTD devices present in the system, and * notify each currently active MTD 'user' of its arrival. Returns * zero on success or non-zero on failure. 
*/ - -int add_mtd_device(struct mtd_info *mtd) +int add_mtd_device(struct mtd_info *mtd, bool partitioned) { struct device_node *np = mtd_get_of_node(mtd); struct mtd_info *master = mtd_get_master(mtd); @@ -687,10 +710,17 @@ int add_mtd_device(struct mtd_info *mtd) ofidx = -1; if (np) ofidx = of_alias_get_id(np, "mtd"); - if (ofidx >= 0) - i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); - else - i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); + if (partitioned) { + if (ofidx >= 0) + i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); + else + i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); + } else { + if (ofidx >= 0) + i = idr_alloc(&mtd_master_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); + else + i = idr_alloc(&mtd_master_idr, mtd, 0, 0, GFP_KERNEL); + } if (i < 0) { error = i; goto fail_locked; @@ -738,10 +768,18 @@ int add_mtd_device(struct mtd_info *mtd) /* Caller should have set dev.parent to match the * physical device, if appropriate. */ - mtd->dev.type = &mtd_devtype; - mtd->dev.class = &mtd_class; - mtd->dev.devt = MTD_DEVT(i); - error = dev_set_name(&mtd->dev, "mtd%d", i); + if (partitioned) { + mtd->dev.type = &mtd_devtype; + mtd->dev.class = &mtd_class; + mtd->dev.devt = MTD_DEVT(i); + dev_set_name(&mtd->dev, "mtd%d", i); + error = dev_set_name(&mtd->dev, "mtd%d", i); + } else { + mtd->dev.type = &mtd_master_devtype; + mtd->dev.class = &mtd_master_class; + mtd->dev.devt = MKDEV(MAJOR(mtd_master_devt), i); + error = dev_set_name(&mtd->dev, "mtd_master%d", i); + } if (error) goto fail_devname; dev_set_drvdata(&mtd->dev, mtd); @@ -749,6 +787,7 @@ int add_mtd_device(struct mtd_info *mtd) of_node_get(mtd_get_of_node(mtd)); error = device_register(&mtd->dev); if (error) { + pr_err("mtd: %s device_register fail %d\n", mtd->name, error); put_device(&mtd->dev); goto fail_added; } @@ -760,10 +799,13 @@ int add_mtd_device(struct mtd_info *mtd) mtd_debugfs_populate(mtd); - device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, - "mtd%dro", i); + if (partitioned) { + device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, + "mtd%dro", i); + } - pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name); + pr_debug("mtd: Giving out %spartitioned device %d to %s\n", + partitioned ? 
"" : "un-", i, mtd->name); /* No need to get a refcount on the module containing the notifier, since we hold the mtd_table_mutex */ list_for_each_entry(not, &mtd_notifiers, list) @@ -771,13 +813,16 @@ int add_mtd_device(struct mtd_info *mtd) mutex_unlock(&mtd_table_mutex); - if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { - if (IS_BUILTIN(CONFIG_MTD)) { - pr_info("mtd: setting mtd%d (%s) as root device\n", mtd->index, mtd->name); - ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); - } else { - pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", - mtd->index, mtd->name); + if (partitioned) { + if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { + if (IS_BUILTIN(CONFIG_MTD)) { + pr_info("mtd: setting mtd%d (%s) as root device\n", + mtd->index, mtd->name); + ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); + } else { + pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", + mtd->index, mtd->name); + } } } @@ -793,7 +838,10 @@ int add_mtd_device(struct mtd_info *mtd) fail_added: of_node_put(mtd_get_of_node(mtd)); fail_devname: - idr_remove(&mtd_idr, i); + if (partitioned) + idr_remove(&mtd_idr, i); + else + idr_remove(&mtd_master_idr, i); fail_locked: mutex_unlock(&mtd_table_mutex); return error; @@ -811,12 +859,14 @@ int add_mtd_device(struct mtd_info *mtd) int del_mtd_device(struct mtd_info *mtd) { - int ret; struct mtd_notifier *not; + struct idr *idr; + int ret; mutex_lock(&mtd_table_mutex); - if (idr_find(&mtd_idr, mtd->index) != mtd) { + idr = mtd->dev.class == &mtd_class ? &mtd_idr : &mtd_master_idr; + if (idr_find(idr, mtd->index) != mtd) { ret = -ENODEV; goto out_error; } @@ -1056,6 +1106,7 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, const struct mtd_partition *parts, int nr_parts) { + struct mtd_info *parent; int ret, err; mtd_set_dev_defaults(mtd); @@ -1064,25 +1115,30 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, if (ret) goto out; + ret = add_mtd_device(mtd, false); + if (ret) + goto out; + if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) { - ret = add_mtd_device(mtd); + ret = mtd_add_partition(mtd, mtd->name, 0, MTDPART_SIZ_FULL, &parent); if (ret) goto out; + + } else { + parent = mtd; } /* Prefer parsed partitions over driver-provided fallback */ - ret = parse_mtd_partitions(mtd, types, parser_data); + ret = parse_mtd_partitions(parent, types, parser_data); if (ret == -EPROBE_DEFER) goto out; if (ret > 0) ret = 0; else if (nr_parts) - ret = add_mtd_partitions(mtd, parts, nr_parts); - else if (!device_is_registered(&mtd->dev)) - ret = add_mtd_device(mtd); - else - ret = 0; + ret = add_mtd_partitions(parent, parts, nr_parts); + else if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) + ret = mtd_add_partition(parent, mtd->name, 0, MTDPART_SIZ_FULL, NULL); if (ret) goto out; @@ -1102,13 +1158,14 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types, register_reboot_notifier(&mtd->reboot_notifier); } + return 0; out: - if (ret) { - nvmem_unregister(mtd->otp_user_nvmem); - nvmem_unregister(mtd->otp_factory_nvmem); - } + nvmem_unregister(mtd->otp_user_nvmem); + nvmem_unregister(mtd->otp_factory_nvmem); - if (ret && device_is_registered(&mtd->dev)) { + del_mtd_partitions(mtd); + + if (device_is_registered(&mtd->dev)) { err = del_mtd_device(mtd); if (err) pr_err("Error when deleting MTD device (%d)\n", err); @@ -1267,8 +1324,7 @@ int __get_mtd_device(struct mtd_info *mtd) mtd = mtd->parent; } - if 
(IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) - kref_get(&master->refcnt); + kref_get(&master->refcnt); return 0; } @@ -1362,8 +1418,7 @@ void __put_mtd_device(struct mtd_info *mtd) mtd = parent; } - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) - kref_put(&master->refcnt, mtd_device_release); + kref_put(&master->refcnt, mtd_device_release); module_put(master->owner); @@ -2530,6 +2585,16 @@ static int __init init_mtd(void) if (ret) goto err_reg; + ret = class_register(&mtd_master_class); + if (ret) + goto err_reg2; + + ret = alloc_chrdev_region(&mtd_master_devt, 0, MTD_MASTER_DEVS, "mtd_master"); + if (ret < 0) { + pr_err("unable to allocate char dev region\n"); + goto err_chrdev; + } + mtd_bdi = mtd_bdi_init("mtd"); if (IS_ERR(mtd_bdi)) { ret = PTR_ERR(mtd_bdi); @@ -2554,6 +2619,10 @@ static int __init init_mtd(void) bdi_unregister(mtd_bdi); bdi_put(mtd_bdi); err_bdi: + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); +err_chrdev: + class_unregister(&mtd_master_class); +err_reg2: class_unregister(&mtd_class); err_reg: pr_err("Error registering mtd class or bdi: %d\n", ret); @@ -2567,9 +2636,12 @@ static void __exit cleanup_mtd(void) if (proc_mtd) remove_proc_entry("mtd", NULL); class_unregister(&mtd_class); + class_unregister(&mtd_master_class); + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); bdi_unregister(mtd_bdi); bdi_put(mtd_bdi); idr_destroy(&mtd_idr); + idr_destroy(&mtd_master_idr); } module_init(init_mtd); diff --git a/drivers/mtd/mtdcore.h b/drivers/mtd/mtdcore.h index b014861a06a6..2258d31c5aa6 100644 --- a/drivers/mtd/mtdcore.h +++ b/drivers/mtd/mtdcore.h @@ -8,7 +8,7 @@ extern struct mutex mtd_table_mutex; extern struct backing_dev_info *mtd_bdi; struct mtd_info *__mtd_next_device(int i); -int __must_check add_mtd_device(struct mtd_info *mtd); +int __must_check add_mtd_device(struct mtd_info *mtd, bool partitioned); int del_mtd_device(struct mtd_info *mtd); int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); int del_mtd_partitions(struct mtd_info *); diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c index 994e8c51e674..5a3db36d734e 100644 --- a/drivers/mtd/mtdpart.c +++ b/drivers/mtd/mtdpart.c @@ -86,8 +86,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, * parent conditional on that option. Note, this is a way to * distinguish between the parent and its partitions in sysfs. */ - child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ? - &parent->dev : parent->dev.parent; + child->dev.parent = &parent->dev; child->dev.of_node = part->of_node; child->parent = parent; child->part.offset = part->offset; @@ -243,7 +242,7 @@ static int mtd_add_partition_attrs(struct mtd_info *new) } int mtd_add_partition(struct mtd_info *parent, const char *name, - long long offset, long long length) + long long offset, long long length, struct mtd_info **out) { struct mtd_info *master = mtd_get_master(parent); u64 parent_size = mtd_is_partition(parent) ? 
@@ -276,12 +275,15 @@ int mtd_add_partition(struct mtd_info *parent, const char *name, list_add_tail(&child->part.node, &parent->partitions); mutex_unlock(&master->master.partitions_lock); - ret = add_mtd_device(child); + ret = add_mtd_device(child, true); if (ret) goto err_remove_part; mtd_add_partition_attrs(child); + if (out) + *out = child; + return 0; err_remove_part: @@ -413,7 +415,7 @@ int add_mtd_partitions(struct mtd_info *parent, list_add_tail(&child->part.node, &parent->partitions); mutex_unlock(&master->master.partitions_lock); - ret = add_mtd_device(child); + ret = add_mtd_device(child, true); if (ret) { mutex_lock(&master->master.partitions_lock); list_del(&child->part.node); @@ -590,9 +592,6 @@ static int mtd_part_of_parse(struct mtd_info *master, int ret, err = 0; dev = &master->dev; - /* Use parent device (controller) if the top level MTD is not registered */ - if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master)) - dev = master->dev.parent; np = mtd_get_of_node(master); if (mtd_is_partition(master)) @@ -711,6 +710,7 @@ int parse_mtd_partitions(struct mtd_info *master, const char *const *types, if (ret < 0 && !err) err = ret; } + return err; } diff --git a/include/linux/mtd/partitions.h b/include/linux/mtd/partitions.h index b74a539ec581..5daf80df9e89 100644 --- a/include/linux/mtd/partitions.h +++ b/include/linux/mtd/partitions.h @@ -108,7 +108,7 @@ extern void deregister_mtd_parser(struct mtd_part_parser *parser); deregister_mtd_parser) int mtd_add_partition(struct mtd_info *master, const char *name, - long long offset, long long length); + long long offset, long long length, struct mtd_info **part); int mtd_del_partition(struct mtd_info *master, int partno); uint64_t mtd_get_device_size(const struct mtd_info *mtd); From patchwork Mon Apr 14 09:37:53 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 14B73C369B2 for ; Mon, 14 Apr 2025 09:49:28 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A515310E51B; Mon, 14 Apr 2025 09:49:27 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="ZMqfVhQ9"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4690810E51B; Mon, 14 Apr 2025 09:49:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624167; x=1776160167; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3/hTDi2F+AW7vI8uPGYluEvnRCjR8QEBwsqAK4AjFTA=; b=ZMqfVhQ9jU0iDQPbPTpF0PX6TGRj2gKJB0m3UAaCVbfjFevMHbUkgGJH 1pSKUiJwy0y8T8eztElbfu7sZkpKKRhrz25wNE7CZdmlPZ88pQGACveIn nJL+6pmtkThzH102dPX9WEECriOWFQeyQ0+XGq3HbOOtDcJtknPvjECR+ //Pq6oHpULHEWJa/WKyamFIGw6F1rDbhnZlJgdrAYzM+iZtHyfDXGWAZl 1i/DBfEx/BcG9Fh/D3EDMx/jhA0QDe43wONei7Oh2VHO7xMZI8Rsn8gxS Lf+FSBOdUt5xh+fu0+f1S31/3VJWzU3a+i9tO8UOKXHXOShfroJcnehNl A==; X-CSE-ConnectionGUID: 
Zg6K5adWR7WGMmaCRxH3XQ== X-CSE-MsgGUID: xuyYugBATqKHPhHnwMw4rg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222636" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222636" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:27 -0700 X-CSE-ConnectionGUID: NLgwFse2S7akAZmcQlwmDA== X-CSE-MsgGUID: NPRdBFV0RUKXasfWUjavZQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954857" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:21 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v8 02/12] mtd: add driver for intel graphics non-volatile memory device Date: Mon, 14 Apr 2025 12:37:53 +0300 Message-ID: <20250414093803.2133463-3-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Add auxiliary driver for intel discrete graphics non-volatile memory device. CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- MAINTAINERS | 7 ++ drivers/mtd/devices/Kconfig | 11 +++ drivers/mtd/devices/Makefile | 1 + drivers/mtd/devices/mtd_intel_dg.c | 138 +++++++++++++++++++++++++++++ include/linux/intel_dg_nvm_aux.h | 27 ++++++ 5 files changed, 184 insertions(+) create mode 100644 drivers/mtd/devices/mtd_intel_dg.c create mode 100644 include/linux/intel_dg_nvm_aux.h diff --git a/MAINTAINERS b/MAINTAINERS index 251b1f3d0e92..b349e6c01acf 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11833,6 +11833,13 @@ L: linux-kernel@vger.kernel.org S: Supported F: arch/x86/include/asm/intel-family.h +INTEL DISCRETE GRAPHICS NVM MTD DRIVER +M: Alexander Usyskin +L: linux-mtd@lists.infradead.org +S: Supported +F: drivers/mtd/devices/mtd_intel_dg.c +F: include/linux/intel_dg_nvm_aux.h + INTEL DRM DISPLAY FOR XE AND I915 DRIVERS M: Jani Nikula M: Rodrigo Vivi diff --git a/drivers/mtd/devices/Kconfig b/drivers/mtd/devices/Kconfig index ff2f9e55ef28..59be6d3f0d32 100644 --- a/drivers/mtd/devices/Kconfig +++ b/drivers/mtd/devices/Kconfig @@ -183,6 +183,17 @@ config MTD_POWERNV_FLASH platforms from Linux. This device abstracts away the firmware interface for flash access. +config MTD_INTEL_DG + tristate "Intel Discrete Graphics non-volatile memory driver" + depends on AUXILIARY_BUS + depends on MTD + help + This provides an MTD device to access Intel Discrete Graphics + non-volatile memory. 
+ + To compile this driver as a module, choose M here: the module + will be called mtd-intel-dg. + comment "Disk-On-Chip Device Drivers" config MTD_DOCG3 diff --git a/drivers/mtd/devices/Makefile b/drivers/mtd/devices/Makefile index d11eb2b8b6f8..9fe4ce9cffde 100644 --- a/drivers/mtd/devices/Makefile +++ b/drivers/mtd/devices/Makefile @@ -18,6 +18,7 @@ obj-$(CONFIG_MTD_SST25L) += sst25l.o obj-$(CONFIG_MTD_BCM47XXSFLASH) += bcm47xxsflash.o obj-$(CONFIG_MTD_ST_SPI_FSM) += st_spi_fsm.o obj-$(CONFIG_MTD_POWERNV_FLASH) += powernv_flash.o +obj-$(CONFIG_MTD_INTEL_DG) += mtd_intel_dg.o CFLAGS_docg3.o += -I$(src) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c new file mode 100644 index 000000000000..963a88cacc6c --- /dev/null +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -0,0 +1,138 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +struct intel_dg_nvm { + struct kref refcnt; + void __iomem *base; + size_t size; + unsigned int nregions; + struct { + const char *name; + u8 id; + u64 offset; + u64 size; + } regions[] __counted_by(nregions); +}; + +static void intel_dg_nvm_release(struct kref *kref) +{ + struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); + int i; + + pr_debug("freeing intel_dg nvm\n"); + for (i = 0; i < nvm->nregions; i++) + kfree(nvm->regions[i].name); + kfree(nvm); +} + +static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *aux_dev_id) +{ + struct intel_dg_nvm_dev *invm = auxiliary_dev_to_intel_dg_nvm_dev(aux_dev); + struct device *device; + struct intel_dg_nvm *nvm; + unsigned int nregions; + unsigned int i, n; + char *name; + int ret; + + device = &aux_dev->dev; + + /* count available regions */ + for (nregions = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { + if (invm->regions[i].name) + nregions++; + } + + if (!nregions) { + dev_err(device, "no regions defined\n"); + return -ENODEV; + } + + nvm = kzalloc(struct_size(nvm, regions, nregions), GFP_KERNEL); + if (!nvm) + return -ENOMEM; + + kref_init(&nvm->refcnt); + + nvm->nregions = nregions; + for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { + if (!invm->regions[i].name) + continue; + + name = kasprintf(GFP_KERNEL, "%s.%s", + dev_name(&aux_dev->dev), invm->regions[i].name); + if (!name) + continue; + nvm->regions[n].name = name; + nvm->regions[n].id = i; + n++; + } + nvm->nregions = n; /* in case where kasprintf fail */ + + nvm->base = devm_ioremap_resource(device, &invm->bar); + if (IS_ERR(nvm->base)) { + dev_err(device, "mmio not mapped\n"); + ret = PTR_ERR(nvm->base); + goto err; + } + + dev_set_drvdata(&aux_dev->dev, nvm); + + return 0; + +err: + kref_put(&nvm->refcnt, intel_dg_nvm_release); + return ret; +} + +static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev) +{ + struct intel_dg_nvm *nvm = dev_get_drvdata(&aux_dev->dev); + + if (!nvm) + return; + + dev_set_drvdata(&aux_dev->dev, NULL); + + kref_put(&nvm->refcnt, intel_dg_nvm_release); +} + +static const struct auxiliary_device_id intel_dg_mtd_id_table[] = { + { + .name = "i915.nvm", + }, + { + .name = "xe.nvm", + }, + { + /* sentinel */ + } +}; +MODULE_DEVICE_TABLE(auxiliary, intel_dg_mtd_id_table); + +static struct auxiliary_driver intel_dg_mtd_driver = { + .probe = intel_dg_mtd_probe, + .remove = intel_dg_mtd_remove, + .driver = { + /* auxiliary_driver_register() sets .name to be the modname */ + }, + 
.id_table = intel_dg_mtd_id_table +}; + +module_auxiliary_driver(intel_dg_mtd_driver); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Intel Corporation"); +MODULE_DESCRIPTION("Intel DGFX MTD driver"); diff --git a/include/linux/intel_dg_nvm_aux.h b/include/linux/intel_dg_nvm_aux.h new file mode 100644 index 000000000000..68df634c994c --- /dev/null +++ b/include/linux/intel_dg_nvm_aux.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. + */ + +#ifndef __INTEL_DG_NVM_AUX_H__ +#define __INTEL_DG_NVM_AUX_H__ + +#include + +#define INTEL_DG_NVM_REGIONS 13 + +struct intel_dg_nvm_region { + const char *name; +}; + +struct intel_dg_nvm_dev { + struct auxiliary_device aux_dev; + bool writable_override; + struct resource bar; + const struct intel_dg_nvm_region *regions; +}; + +#define auxiliary_dev_to_intel_dg_nvm_dev(auxiliary_dev) \ + container_of(auxiliary_dev, struct intel_dg_nvm_dev, aux_dev) + +#endif /* __INTEL_DG_NVM_AUX_H__ */ From patchwork Mon Apr 14 09:37:54 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B12EDC369B6 for ; Mon, 14 Apr 2025 09:49:34 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 43B0910E518; Mon, 14 Apr 2025 09:49:34 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="a/CkO73y"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0452010E519; Mon, 14 Apr 2025 09:49:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624173; x=1776160173; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a0B7fMkJxt2Wb7bBeMVZwsKjEvvl5x89e9KqNreoDzc=; b=a/CkO73yOy9Zn7QNBw8nE4e7MIpW/ikwyPMduC58JyVbDPcEtgA+VVgJ oQDQ8sXKh+8kJwZ2ndNydMRK6emmEbsCnIYt4AlBUqQAPJm3i/H0A5TRp srofX/WvdnRyrY75Q5w9T1Sycac5Z5nn7jNSBeXuGUU0X1C6rCSlIsNXO cmxyexzGN2F7brbOPA6rc7Lhk7ZNuSSS+12FV9a+wSGS1lbfYDS6czo4L cqSztgILpqxGZjJHbnLeEbefLCAl6e1YbUPKANwDXTdrisVb04NJWKUM2 O08aaSNSiaTAmT14Z3EdYKDJg2yCyP8Y0hWhb/uIrnGuO+WXcxd5DXEYT A==; X-CSE-ConnectionGUID: TehsAF+aQEecdfadRkNxiA== X-CSE-MsgGUID: OBUMMrxwSEG0GaUoUi6YPg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222652" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222652" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:32 -0700 X-CSE-ConnectionGUID: SjRXcjm0S3W/nCb+dQyXUA== X-CSE-MsgGUID: HfkOO8VdQwOWHNJkimIZWw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954895" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:27 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi 
, =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v8 03/12] mtd: intel-dg: implement region enumeration Date: Mon, 14 Apr 2025 12:37:54 +0300 Message-ID: <20250414093803.2133463-4-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" In intel-dg, there is no access to the spi controller, the information is extracted from the descriptor region. CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 198 +++++++++++++++++++++++++++++ 1 file changed, 198 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 963a88cacc6c..ba1c720e717b 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -3,6 +3,8 @@ * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. */ +#include +#include #include #include #include @@ -22,9 +24,199 @@ struct intel_dg_nvm { u8 id; u64 offset; u64 size; + unsigned int is_readable:1; + unsigned int is_writable:1; } regions[] __counted_by(nregions); }; +#define NVM_TRIGGER_REG 0x00000000 +#define NVM_VALSIG_REG 0x00000010 +#define NVM_ADDRESS_REG 0x00000040 +#define NVM_REGION_ID_REG 0x00000044 +/* + * [15:0]-Erase size = 0x0010 4K 0x0080 32K 0x0100 64K + * [23:16]-Reserved + * [31:24]-Erase MEM RegionID + */ +#define NVM_ERASE_REG 0x00000048 +#define NVM_ACCESS_ERROR_REG 0x00000070 +#define NVM_ADDRESS_ERROR_REG 0x00000074 + +/* Flash Valid Signature */ +#define NVM_FLVALSIG 0x0FF0A55A + +#define NVM_MAP_ADDR_MASK GENMASK(7, 0) +#define NVM_MAP_ADDR_SHIFT 0x00000004 + +#define NVM_REGION_ID_DESCRIPTOR 0 +/* Flash Region Base Address */ +#define NVM_FRBA 0x40 +/* Flash Region __n - Flash Descriptor Record */ +#define NVM_FLREG(__n) (NVM_FRBA + ((__n) * 4)) +/* Flash Map 1 Register */ +#define NVM_FLMAP1_REG 0x18 +#define NVM_FLMSTR4_OFFSET 0x00C + +#define NVM_ACCESS_ERROR_PCIE_MASK 0x7 + +#define NVM_FREG_BASE_MASK GENMASK(15, 0) +#define NVM_FREG_ADDR_MASK GENMASK(31, 16) +#define NVM_FREG_ADDR_SHIFT 12 +#define NVM_FREG_MIN_REGION_SIZE 0xFFF + +static inline void idg_nvm_set_region_id(struct intel_dg_nvm *nvm, u8 region) +{ + iowrite32((u32)region, nvm->base + NVM_REGION_ID_REG); +} + +static inline u32 idg_nvm_error(struct intel_dg_nvm *nvm) +{ + void __iomem *base = nvm->base; + + u32 reg = ioread32(base + NVM_ACCESS_ERROR_REG) & NVM_ACCESS_ERROR_PCIE_MASK; + + /* reset error bits */ + if (reg) + iowrite32(reg, base + NVM_ACCESS_ERROR_REG); + + return reg; +} + +static inline u32 idg_nvm_read32(struct intel_dg_nvm *nvm, u32 address) +{ + void __iomem *base = nvm->base; + + iowrite32(address, 
base + NVM_ADDRESS_REG); + + return ioread32(base + NVM_TRIGGER_REG); +} + +static int idg_nvm_get_access_map(struct intel_dg_nvm *nvm, u32 *access_map) +{ + u32 flmap1; + u32 fmba; + u32 fmstr4; + u32 fmstr4_addr; + + idg_nvm_set_region_id(nvm, NVM_REGION_ID_DESCRIPTOR); + + flmap1 = idg_nvm_read32(nvm, NVM_FLMAP1_REG); + if (idg_nvm_error(nvm)) + return -EIO; + /* Get Flash Master Baser Address (FMBA) */ + fmba = (FIELD_GET(NVM_MAP_ADDR_MASK, flmap1) << NVM_MAP_ADDR_SHIFT); + fmstr4_addr = fmba + NVM_FLMSTR4_OFFSET; + + fmstr4 = idg_nvm_read32(nvm, fmstr4_addr); + if (idg_nvm_error(nvm)) + return -EIO; + + *access_map = fmstr4; + return 0; +} + +static bool idg_nvm_region_readable(u32 access_map, u8 region) +{ + if (region < 12) + return access_map & BIT(region + 8); /* [19:8] */ + else + return access_map & BIT(region - 12); /* [3:0] */ +} + +static bool idg_nvm_region_writable(u32 access_map, u8 region) +{ + if (region < 12) + return access_map & BIT(region + 20); /* [31:20] */ + else + return access_map & BIT(region - 8); /* [7:4] */ +} + +static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) +{ + u32 is_valid; + + idg_nvm_set_region_id(nvm, NVM_REGION_ID_DESCRIPTOR); + + is_valid = idg_nvm_read32(nvm, NVM_VALSIG_REG); + if (idg_nvm_error(nvm)) + return -EIO; + + if (is_valid != NVM_FLVALSIG) + return -ENODEV; + + return 0; +} + +static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) +{ + int ret; + unsigned int i, n; + u32 access_map = 0; + + /* clean error register, previous errors are ignored */ + idg_nvm_error(nvm); + + ret = idg_nvm_is_valid(nvm); + if (ret) { + dev_err(device, "The MEM is not valid %d\n", ret); + return ret; + } + + if (idg_nvm_get_access_map(nvm, &access_map)) + return -EIO; + + for (i = 0, n = 0; i < nvm->nregions; i++) { + u32 address, base, limit, region; + u8 id = nvm->regions[i].id; + + address = NVM_FLREG(id); + region = idg_nvm_read32(nvm, address); + + base = FIELD_GET(NVM_FREG_BASE_MASK, region) << NVM_FREG_ADDR_SHIFT; + limit = (FIELD_GET(NVM_FREG_ADDR_MASK, region) << NVM_FREG_ADDR_SHIFT) | + NVM_FREG_MIN_REGION_SIZE; + + dev_dbg(device, "[%d] %s: region: 0x%08X base: 0x%08x limit: 0x%08x\n", + id, nvm->regions[i].name, region, base, limit); + + if (base >= limit || (i > 0 && limit == 0)) { + dev_dbg(device, "[%d] %s: disabled\n", + id, nvm->regions[i].name); + nvm->regions[i].is_readable = 0; + continue; + } + + if (nvm->size < limit) + nvm->size = limit; + + nvm->regions[i].offset = base; + nvm->regions[i].size = limit - base + 1; + /* No write access to descriptor; mask it out*/ + nvm->regions[i].is_writable = idg_nvm_region_writable(access_map, id); + + nvm->regions[i].is_readable = idg_nvm_region_readable(access_map, id); + dev_dbg(device, "Registered, %s id=%d offset=%lld size=%lld rd=%d wr=%d\n", + nvm->regions[i].name, + nvm->regions[i].id, + nvm->regions[i].offset, + nvm->regions[i].size, + nvm->regions[i].is_readable, + nvm->regions[i].is_writable); + + if (nvm->regions[i].is_readable) + n++; + } + + dev_dbg(device, "Registered %d regions\n", n); + + /* Need to add 1 to the amount of memory + * so it is reported as an even block + */ + nvm->size += 1; + + return n; +} + static void intel_dg_nvm_release(struct kref *kref) { struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); @@ -88,6 +280,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, goto err; } + ret = intel_dg_nvm_init(nvm, device); + if (ret < 0) { + dev_err(device, "cannot initialize nvm %d\n", ret); + goto err; + } + 
dev_set_drvdata(&aux_dev->dev, nvm); return 0; From patchwork Mon Apr 14 09:37:55 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D5554C369B6 for ; Mon, 14 Apr 2025 09:49:40 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 74F4410E51D; Mon, 14 Apr 2025 09:49:40 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="gNw2uYQb"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0635010E51E; Mon, 14 Apr 2025 09:49:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624179; x=1776160179; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lVtZQbsxfX/iqDLELTpbrgDvP7eZNLkxpNu2nwCF3LE=; b=gNw2uYQbQBtBXCRFtBDhBUu8T7OSYA0RANsdpKdx52xe/DN9XM++Tlse lDKENgpMYBefzjwh4oE9+9MtD1PYEJN9Ma9B2ctimiGt8lx5EP+/IuWBz PDDazwiTYnZpdviyLJT8anYIDbT7BiZka1meWilbrtLxg2ORrPyCAG6Tt ALtFjL72JdKxxHxodvkgiDa9d7XNDsjW9oSM8x0lEcgURtDjEZMiGwkEk RIbRuFB+GgNw2gC7aIHtMxi+6fySd2zwY7GueO3SBh+8CllSIustMnN5j x0EkLACyiWi+DzqWCkOQK+1wPRt2wVA2vpjE6WO+Qbj9qba1A1jo4m/t6 A==; X-CSE-ConnectionGUID: edw5dNHhSUSIy7cNFLTLzQ== X-CSE-MsgGUID: OV6luRpgRpeUkRco43HRaQ== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222680" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222680" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:38 -0700 X-CSE-ConnectionGUID: KKkUy+/5TH6uXDQY/V7+Og== X-CSE-MsgGUID: svbGhzjQTJWQByeoANGkUQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954922" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:32 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler , Vitaly Lubart Subject: [PATCH v8 04/12] mtd: intel-dg: implement access functions Date: Mon, 14 Apr 2025 12:37:55 +0300 Message-ID: <20250414093803.2133463-5-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Implement read(), erase() and write() functions. CC: Lucas De Marchi CC: Rodrigo Vivi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Co-developed-by: Vitaly Lubart Signed-off-by: Vitaly Lubart Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 197 +++++++++++++++++++++++++++++ 1 file changed, 197 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index ba1c720e717b..6f67cf966d05 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -5,13 +5,16 @@ #include #include +#include #include #include #include +#include #include #include #include #include +#include #include struct intel_dg_nvm { @@ -91,6 +94,33 @@ static inline u32 idg_nvm_read32(struct intel_dg_nvm *nvm, u32 address) return ioread32(base + NVM_TRIGGER_REG); } +static inline u64 idg_nvm_read64(struct intel_dg_nvm *nvm, u32 address) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + return readq(base + NVM_TRIGGER_REG); +} + +static void idg_nvm_write32(struct intel_dg_nvm *nvm, u32 address, u32 data) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + iowrite32(data, base + NVM_TRIGGER_REG); +} + +static void idg_nvm_write64(struct intel_dg_nvm *nvm, u32 address, u64 data) +{ + void __iomem *base = nvm->base; + + iowrite32(address, base + NVM_ADDRESS_REG); + + writeq(data, base + NVM_TRIGGER_REG); +} + static int idg_nvm_get_access_map(struct intel_dg_nvm *nvm, u32 *access_map) { u32 flmap1; @@ -147,6 +177,173 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) return 0; } +__maybe_unused +static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from) +{ + unsigned int i; + + for (i = 0; i < nvm->nregions; i++) { + if ((nvm->regions[i].offset + nvm->regions[i].size - 1) > from && + nvm->regions[i].offset <= from && + nvm->regions[i].size != 0) + break; + } + + return i; +} + +static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to, + loff_t offset, size_t len, const u32 *newdata) +{ + u32 data = idg_nvm_read32(nvm, to); + + if (idg_nvm_error(nvm)) + return -EIO; + + memcpy((u8 *)&data + offset, newdata, len); + + idg_nvm_write32(nvm, to, data); + if (idg_nvm_error(nvm)) + return -EIO; + + return len; +} + +__maybe_unused +static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, + loff_t to, size_t len, const unsigned char *buf) +{ + size_t i; + size_t len8; + size_t len4; + size_t to4; + size_t to_shift; + size_t len_s = len; + ssize_t ret; + + idg_nvm_set_region_id(nvm, region); + + to4 = ALIGN_DOWN(to, sizeof(u32)); + to_shift = min(sizeof(u32) - ((size_t)to - to4), len); + if (to - to4) { + ret = idg_nvm_rewrite_partial(nvm, to4, to - to4, to_shift, (uint32_t *)&buf[0]); + if (ret < 0) + return ret; + + buf += to_shift; + to += to_shift; + len_s -= to_shift; + } + + len8 = ALIGN_DOWN(len_s, sizeof(u64)); + for (i = 0; i < len8; i += sizeof(u64)) { + u64 data; + + memcpy(&data, &buf[i], sizeof(u64)); + idg_nvm_write64(nvm, to + i, data); + if (idg_nvm_error(nvm)) + return -EIO; + } + + len4 = len_s - len8; + if (len4 >= sizeof(u32)) { + u32 data; + + memcpy(&data, &buf[i], sizeof(u32)); + idg_nvm_write32(nvm, to + i, data); + if (idg_nvm_error(nvm)) + return -EIO; + i += sizeof(u32); + len4 -= sizeof(u32); + } + + if (len4 
> 0) { + ret = idg_nvm_rewrite_partial(nvm, to + i, 0, len4, (uint32_t *)&buf[i]); + if (ret < 0) + return ret; + } + + return len; +} + +__maybe_unused +static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, + loff_t from, size_t len, unsigned char *buf) +{ + size_t i; + size_t len8; + size_t len4; + size_t from4; + size_t from_shift; + size_t len_s = len; + + idg_nvm_set_region_id(nvm, region); + + from4 = ALIGN_DOWN(from, sizeof(u32)); + from_shift = min(sizeof(u32) - ((size_t)from - from4), len); + + if (from - from4) { + u32 data = idg_nvm_read32(nvm, from4); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[0], (u8 *)&data + (from - from4), from_shift); + len_s -= from_shift; + buf += from_shift; + from += from_shift; + } + + len8 = ALIGN_DOWN(len_s, sizeof(u64)); + for (i = 0; i < len8; i += sizeof(u64)) { + u64 data = idg_nvm_read64(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + + memcpy(&buf[i], &data, sizeof(data)); + } + + len4 = len_s - len8; + if (len4 >= sizeof(u32)) { + u32 data = idg_nvm_read32(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[i], &data, sizeof(data)); + i += sizeof(u32); + len4 -= sizeof(u32); + } + + if (len4 > 0) { + u32 data = idg_nvm_read32(nvm, from + i); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[i], &data, len4); + } + + return len; +} + +__maybe_unused +static ssize_t +idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_addr) +{ + u64 i; + const u32 block = 0x10; + void __iomem *base = nvm->base; + + for (i = 0; i < len; i += SZ_4K) { + iowrite32(from + i, base + NVM_ADDRESS_REG); + iowrite32(region << 24 | block, base + NVM_ERASE_REG); + /* Since the writes are via sguint + * we cannot do back to back erases. + */ + msleep(50); + } + return len; +} + static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) { int ret; From patchwork Mon Apr 14 09:37:56 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 77658C369B2 for ; Mon, 14 Apr 2025 09:50:00 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0AB5110E513; Mon, 14 Apr 2025 09:50:00 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="kpaugnsG"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id BB3A810E50D; Mon, 14 Apr 2025 09:49:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624199; x=1776160199; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9ukI5iPUv8J84CoV1OEbUeSFBHIZ7pB9cPVw1XNT34Y=; b=kpaugnsGC6wyHIo5EIPX3iyr5vMMmp3vJiDzJ8mSNk37yxUyly15bLXo adl3k5+SJHs36yUQgkP+O8OAH4pKbXT/SjAc2O+3DnJ7i6pM7+YYzHSd4 tCVfM5QofbmDoJi2U9SmjF0EBecGXeXY8ZAM/VOboaxyISVdk8GN0UQSQ oCGfKZ8pA8iEV6pkgnXMi7Cadeqd/jEvBtT7EnnQNOZ4nxWGDSjHSuqrk 
eQR/vo170XnlaAliWRB3XSSHzeV+cyHlTzvYv/MvnJppsL7nxPoXzozMP Vve/AF1N9MQSQe/XH0M3AGmilVdCJLjz6+rfxe42IMyCviMfXHoai8xw6 w==; X-CSE-ConnectionGUID: eoOHw7eDT42Nh2qtcbEPrg== X-CSE-MsgGUID: cAcTmwEcQLy3YQKo3cnERA== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222754" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222754" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:45 -0700 X-CSE-ConnectionGUID: oxngW+g7SgKU5O2eFAjHPw== X-CSE-MsgGUID: B/x0YIgKTz2xGpBpNIyWiA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954945" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:38 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler , Vitaly Lubart Subject: [PATCH v8 05/12] mtd: intel-dg: register with mtd Date: Mon, 14 Apr 2025 12:37:56 +0300 Message-ID: <20250414093803.2133463-6-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Register the on-die nvm device with the mtd subsystem. Refcount nvm object on _get and _put mtd callbacks. For erase operation address and size should be 4K aligned. For write operation address and size has to be 4bytes aligned. 
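[Editor's note] The 4 KiB erase-alignment rule stated above is enforced by intel_dg_mtd_erase() later in this patch, which rejects unaligned requests with -EINVAL. Below is a minimal sketch of an in-kernel caller that rounds an arbitrary range out to erase-block granularity before going through the generic MTD API; the helper name is made up for illustration, and only mtd_erase(), struct erase_info, ALIGN()/ALIGN_DOWN() and SZ_4K are existing kernel interfaces:

    #include <linux/kernel.h>
    #include <linux/mtd/mtd.h>
    #include <linux/sizes.h>

    /* Illustrative helper: erase the 4 KiB-aligned span covering [addr, addr + len). */
    static int example_erase_covering(struct mtd_info *mtd, loff_t addr, size_t len)
    {
            struct erase_info ei = { };

            /*
             * intel-dg accepts only whole 4 KiB erase blocks, so round the
             * requested range out to erase-block granularity first.
             */
            ei.addr = ALIGN_DOWN(addr, SZ_4K);
            ei.len = ALIGN(addr + len, SZ_4K) - ei.addr;

            return mtd_erase(mtd, &ei);
    }
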
CC: Rodrigo Vivi CC: Lucas De Marchi Acked-by: Miquel Raynal Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Co-developed-by: Vitaly Lubart Signed-off-by: Vitaly Lubart Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 230 ++++++++++++++++++++++++++++- 1 file changed, 226 insertions(+), 4 deletions(-) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 6f67cf966d05..4023f2ebc344 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -5,6 +5,7 @@ #include #include +#include #include #include #include @@ -12,6 +13,8 @@ #include #include #include +#include +#include #include #include #include @@ -19,6 +22,8 @@ struct intel_dg_nvm { struct kref refcnt; + struct mtd_info mtd; + struct mutex lock; /* region access lock */ void __iomem *base; size_t size; unsigned int nregions; @@ -177,7 +182,6 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm) return 0; } -__maybe_unused static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from) { unsigned int i; @@ -209,7 +213,6 @@ static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to, return len; } -__maybe_unused static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, loff_t to, size_t len, const unsigned char *buf) { @@ -266,7 +269,6 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, return len; } -__maybe_unused static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, loff_t from, size_t len, unsigned char *buf) { @@ -325,7 +327,6 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, return len; } -__maybe_unused static ssize_t idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_addr) { @@ -414,6 +415,147 @@ static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) return n; } +static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) +{ + struct intel_dg_nvm *nvm = mtd->priv; + unsigned int idx; + u8 region; + u64 addr; + ssize_t bytes; + loff_t from; + size_t len; + size_t total_len; + + if (WARN_ON(!nvm)) + return -EINVAL; + + if (!IS_ALIGNED(info->addr, SZ_4K) || !IS_ALIGNED(info->len, SZ_4K)) { + dev_err(&mtd->dev, "unaligned erase %llx %llx\n", + info->addr, info->len); + info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; + return -EINVAL; + } + + total_len = info->len; + addr = info->addr; + + guard(mutex)(&nvm->lock); + + while (total_len > 0) { + if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) { + dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len); + info->fail_addr = addr; + return -ERANGE; + } + + idx = idg_nvm_get_region(nvm, addr); + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; + return -ERANGE; + } + + from = addr - nvm->regions[idx].offset; + region = nvm->regions[idx].id; + len = total_len; + if (len > nvm->regions[idx].size - from) + len = nvm->regions[idx].size - from; + + dev_dbg(&mtd->dev, "erasing region[%d] %s from %llx len %zx\n", + region, nvm->regions[idx].name, from, len); + + bytes = idg_erase(nvm, region, from, len, &info->fail_addr); + if (bytes < 0) { + dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes); + info->fail_addr += nvm->regions[idx].offset; + return bytes; + } + + addr += len; + total_len -= len; + } + + return 0; +} + +static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, + size_t *retlen, u_char *buf) +{ + struct intel_dg_nvm *nvm = mtd->priv; + 
ssize_t ret; + unsigned int idx; + u8 region; + + if (WARN_ON(!nvm)) + return -EINVAL; + + idx = idg_nvm_get_region(nvm, from); + + dev_dbg(&mtd->dev, "reading region[%d] %s from %lld len %zd\n", + nvm->regions[idx].id, nvm->regions[idx].name, from, len); + + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + return -ERANGE; + } + + from -= nvm->regions[idx].offset; + region = nvm->regions[idx].id; + if (len > nvm->regions[idx].size - from) + len = nvm->regions[idx].size - from; + + guard(mutex)(&nvm->lock); + + ret = idg_read(nvm, region, from, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "read failed with %zd\n", ret); + return ret; + } + + *retlen = ret; + + return 0; +} + +static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, + size_t *retlen, const u_char *buf) +{ + struct intel_dg_nvm *nvm = mtd->priv; + ssize_t ret; + unsigned int idx; + u8 region; + + if (WARN_ON(!nvm)) + return -EINVAL; + + idx = idg_nvm_get_region(nvm, to); + + dev_dbg(&mtd->dev, "writing region[%d] %s to %lld len %zd\n", + nvm->regions[idx].id, nvm->regions[idx].name, to, len); + + if (idx >= nvm->nregions) { + dev_err(&mtd->dev, "out of range"); + return -ERANGE; + } + + to -= nvm->regions[idx].offset; + region = nvm->regions[idx].id; + if (len > nvm->regions[idx].size - to) + len = nvm->regions[idx].size - to; + + guard(mutex)(&nvm->lock); + + ret = idg_write(nvm, region, to, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "write failed with %zd\n", ret); + return ret; + } + + *retlen = ret; + + return 0; +} + static void intel_dg_nvm_release(struct kref *kref) { struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt); @@ -422,9 +564,80 @@ static void intel_dg_nvm_release(struct kref *kref) pr_debug("freeing intel_dg nvm\n"); for (i = 0; i < nvm->nregions; i++) kfree(nvm->regions[i].name); + mutex_destroy(&nvm->lock); kfree(nvm); } +static int intel_dg_mtd_get_device(struct mtd_info *mtd) +{ + struct mtd_info *master = mtd_get_master(mtd); + struct intel_dg_nvm *nvm = master->priv; + + if (WARN_ON(!nvm)) + return -EINVAL; + pr_debug("get mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt)); + kref_get(&nvm->refcnt); + + return 0; +} + +static void intel_dg_mtd_put_device(struct mtd_info *mtd) +{ + struct mtd_info *master = mtd_get_master(mtd); + struct intel_dg_nvm *nvm = master->priv; + + if (WARN_ON(!nvm)) + return; + pr_debug("put mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt)); + kref_put(&nvm->refcnt, intel_dg_nvm_release); +} + +static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device, + unsigned int nparts, bool writable_override) +{ + unsigned int i; + unsigned int n; + struct mtd_partition *parts = NULL; + int ret; + + dev_dbg(device, "registering with mtd\n"); + + nvm->mtd.owner = THIS_MODULE; + nvm->mtd.dev.parent = device; + nvm->mtd.flags = MTD_CAP_NORFLASH | MTD_WRITEABLE; + nvm->mtd.type = MTD_DATAFLASH; + nvm->mtd.priv = nvm; + nvm->mtd._write = intel_dg_mtd_write; + nvm->mtd._read = intel_dg_mtd_read; + nvm->mtd._erase = intel_dg_mtd_erase; + nvm->mtd._get_device = intel_dg_mtd_get_device; + nvm->mtd._put_device = intel_dg_mtd_put_device; + nvm->mtd.writesize = SZ_1; /* 1 byte granularity */ + nvm->mtd.erasesize = SZ_4K; /* 4K bytes granularity */ + nvm->mtd.size = nvm->size; + + parts = kcalloc(nvm->nregions, sizeof(*parts), GFP_KERNEL); + if (!parts) + return -ENOMEM; + + for (i = 0, n = 0; i < nvm->nregions && n < nparts; i++) { + if (!nvm->regions[i].is_readable) + continue; + parts[n].name = 
nvm->regions[i].name; + parts[n].offset = nvm->regions[i].offset; + parts[n].size = nvm->regions[i].size; + if (!nvm->regions[i].is_writable && !writable_override) + parts[n].mask_flags = MTD_WRITEABLE; + n++; + } + + ret = mtd_device_register(&nvm->mtd, parts, n); + + kfree(parts); + + return ret; +} + static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *aux_dev_id) { @@ -454,6 +667,7 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, return -ENOMEM; kref_init(&nvm->refcnt); + mutex_init(&nvm->lock); nvm->nregions = nregions; for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) { @@ -483,6 +697,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, goto err; } + ret = intel_dg_nvm_init_mtd(nvm, device, ret, invm->writable_override); + if (ret) { + dev_err(device, "failed init mtd %d\n", ret); + goto err; + } + dev_set_drvdata(&aux_dev->dev, nvm); return 0; @@ -499,6 +719,8 @@ static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev) if (!nvm) return; + mtd_device_unregister(&nvm->mtd); + dev_set_drvdata(&aux_dev->dev, NULL); kref_put(&nvm->refcnt, intel_dg_nvm_release); From patchwork Mon Apr 14 09:37:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B0C62C369BB for ; Mon, 14 Apr 2025 09:50:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 769B910E521; Mon, 14 Apr 2025 09:50:00 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="ArCf6PEV"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1D04B10E50D; Mon, 14 Apr 2025 09:49:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624199; x=1776160199; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=02tjDKQNWeXHFLJTCMQ3g4lvV4tJ61jI02p0lATS/lk=; b=ArCf6PEVx5iOK4j+2vVIqr/RpUFzUtxK/97krgkfbS77pj2BvrGhKY9w RaBobxqiKrRDfs1us/hCBnD33bkixOXSYGklY5Y9/jLlVpTwKNi8/Y8aI AtGaQz5w3xxX2ZJAB9Kv8gH9ADdhUiBmgt+pYL4iYmyR+sCXrwL1vQhQR ECUI/50VcvURyj92KAUCjxGX9Tm86Zq2XerFFGJJoPTnNicBGnT0vtzjU 8I1sCLNIDjZ6FENv9DLzTcBFrRYjhJkKBVUwBZX3GBpg27yKnnEuY5hRT 3TsrtWvYsTOVp9m8Y59cdaVjDGvtM4sQ3X+9eL0s60zzE7gSOD1NJDoz2 w==; X-CSE-ConnectionGUID: K94CUoy9QtyicEs3DzyW5w== X-CSE-MsgGUID: 2hnZ1z8bTV+wnA7OaD86YQ== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222772" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222772" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:50 -0700 X-CSE-ConnectionGUID: BoZK3I7aSxKrpxalSod1/g== X-CSE-MsgGUID: vEKs8QihR2iqWcdPAP7C5g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954967" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:45 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 06/12] mtd: intel-dg: align 64bit read and write Date: Mon, 14 Apr 2025 12:37:57 +0300 Message-ID: <20250414093803.2133463-7-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" GSC NVM controller HW errors on quad access overlapping 1K border. Align 64bit read and write to avoid readq/writeq over 1K border. Acked-by: Miquel Raynal Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 35 ++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 4023f2ebc344..3535f7b64429 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -238,6 +238,24 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region, len_s -= to_shift; } + if (!IS_ALIGNED(to, sizeof(u64)) && + ((to ^ (to + len_s)) & GENMASK(31, 10))) { + /* + * Workaround reads/writes across 1k-aligned addresses + * (start u32 before 1k, end u32 after) + * as this fails on hardware. + */ + u32 data; + + memcpy(&data, &buf[0], sizeof(u32)); + idg_nvm_write32(nvm, to, data); + if (idg_nvm_error(nvm)) + return -EIO; + buf += sizeof(u32); + to += sizeof(u32); + len_s -= sizeof(u32); + } + len8 = ALIGN_DOWN(len_s, sizeof(u64)); for (i = 0; i < len8; i += sizeof(u64)) { u64 data; @@ -295,6 +313,23 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region, from += from_shift; } + if (!IS_ALIGNED(from, sizeof(u64)) && + ((from ^ (from + len_s)) & GENMASK(31, 10))) { + /* + * Workaround reads/writes across 1k-aligned addresses + * (start u32 before 1k, end u32 after) + * as this fails on hardware. 
+ */ + u32 data = idg_nvm_read32(nvm, from); + + if (idg_nvm_error(nvm)) + return -EIO; + memcpy(&buf[0], &data, sizeof(data)); + len_s -= sizeof(u32); + buf += sizeof(u32); + from += sizeof(u32); + } + len8 = ALIGN_DOWN(len_s, sizeof(u64)); for (i = 0; i < len8; i += sizeof(u64)) { u64 data = idg_nvm_read64(nvm, from + i); From patchwork Mon Apr 14 09:37:58 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 60801C369B9 for ; Mon, 14 Apr 2025 09:50:03 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9C45F10E522; Mon, 14 Apr 2025 09:50:02 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="ha/Oo563"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 5A0E410E51A; Mon, 14 Apr 2025 09:50:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624200; x=1776160200; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nfNlQaQAx8Vv722WAjFHHGbTA8WrwldfoILRmHYeMfU=; b=ha/Oo563rYzrf7q4iISGT4rFyjQD7uAwjsR45/K1zlu84+37WVrAJ3VC sPo0sObBVAsn7Y3NJB03IWWstZHGn4AwzUOWk7bdH/R0K6Hjc1CmfAYem 3pJwalbjG2Cg86X0KMuS6Mr0bRyKClvmKlRObgH3JB3oIcF2G4wjsteRv iGWn02wOHNGOaUAA/jMTGIR5Z8IVCSMPkeI0ebHzLzK06JgETkcQbA29q aXrTEONrVYXDXBHxvbLXrBJKiM6r5Kb6+1avTM7/Fa4sW5xNsCT3q8Tf1 imTgLCCBDLdHMOY8n8wM6Zii0nSZo1K/EZgiLpnYSIDPEtTfdwc7OBORA g==; X-CSE-ConnectionGUID: HLrBEeeuQwm7uAYMyEqm6w== X-CSE-MsgGUID: 5Zut7xhpSWuBENjIvtMFdg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222802" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222802" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:56 -0700 X-CSE-ConnectionGUID: lDl/VGowT/2n8NWNG/QJjg== X-CSE-MsgGUID: DNc8iYKlQUqQ80BgbgJJ9w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152954994" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:50 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 07/12] mtd: intel-dg: wake card on operations Date: Mon, 14 Apr 2025 12:37:58 +0300 Message-ID: <20250414093803.2133463-8-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> 
References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable runtime PM in mtd driver to notify graphics driver that whole card should be kept awake while nvm operations are performed through this driver. CC: Lucas De Marchi Acked-by: Karthik Poosa Acked-by: Miquel Raynal Signed-off-by: Alexander Usyskin --- drivers/mtd/devices/mtd_intel_dg.c | 79 +++++++++++++++++++++++++----- 1 file changed, 67 insertions(+), 12 deletions(-) diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 3535f7b64429..9f4bb15a03b8 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -15,11 +15,14 @@ #include #include #include +#include #include #include #include #include +#define INTEL_DG_NVM_RPM_TIMEOUT 500 + struct intel_dg_nvm { struct kref refcnt; struct mtd_info mtd; @@ -460,6 +463,7 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) loff_t from; size_t len; size_t total_len; + int ret = 0; if (WARN_ON(!nvm)) return -EINVAL; @@ -474,20 +478,28 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) total_len = info->len; addr = info->addr; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + dev_err(&mtd->dev, "rpm: get failed %d\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); while (total_len > 0) { if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) { dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len); info->fail_addr = addr; - return -ERANGE; + ret = -ERANGE; + goto out; } idx = idg_nvm_get_region(nvm, addr); if (idx >= nvm->nregions) { dev_err(&mtd->dev, "out of range"); info->fail_addr = MTD_FAIL_ADDR_UNKNOWN; - return -ERANGE; + ret = -ERANGE; + goto out; } from = addr - nvm->regions[idx].offset; @@ -503,14 +515,18 @@ static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info) if (bytes < 0) { dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes); info->fail_addr += nvm->regions[idx].offset; - return bytes; + ret = bytes; + goto out; } addr += len; total_len -= len; } - return 0; +out: + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, @@ -539,17 +555,25 @@ static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, if (len > nvm->regions[idx].size - from) len = nvm->regions[idx].size - from; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + dev_err(&mtd->dev, "rpm: get failed %zd\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); ret = idg_read(nvm, region, from, len, buf); if (ret < 0) { dev_dbg(&mtd->dev, "read failed with %zd\n", ret); - return ret; + } else { + *retlen = ret; + ret = 0; } - *retlen = ret; - - return 0; + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, @@ -578,17 +602,25 @@ static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len, if (len > nvm->regions[idx].size - to) len = nvm->regions[idx].size - to; + ret = pm_runtime_resume_and_get(&mtd->dev); + if (ret < 0) { + 
dev_err(&mtd->dev, "rpm: get failed %zd\n", ret); + return ret; + } + guard(mutex)(&nvm->lock); ret = idg_write(nvm, region, to, len, buf); if (ret < 0) { dev_dbg(&mtd->dev, "write failed with %zd\n", ret); - return ret; + } else { + *retlen = ret; + ret = 0; } - *retlen = ret; - - return 0; + pm_runtime_mark_last_busy(&mtd->dev); + pm_runtime_put_autosuspend(&mtd->dev); + return ret; } static void intel_dg_nvm_release(struct kref *kref) @@ -670,6 +702,15 @@ static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device kfree(parts); + if (ret) + goto out; + + devm_pm_runtime_enable(&nvm->mtd.dev); + + pm_runtime_set_autosuspend_delay(&nvm->mtd.dev, INTEL_DG_NVM_RPM_TIMEOUT); + pm_runtime_use_autosuspend(&nvm->mtd.dev); + +out: return ret; } @@ -719,6 +760,17 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, } nvm->nregions = n; /* in case where kasprintf fail */ + devm_pm_runtime_enable(device); + + pm_runtime_set_autosuspend_delay(device, INTEL_DG_NVM_RPM_TIMEOUT); + pm_runtime_use_autosuspend(device); + + ret = pm_runtime_resume_and_get(device); + if (ret < 0) { + dev_err(device, "rpm: get failed %d\n", ret); + goto err_norpm; + } + nvm->base = devm_ioremap_resource(device, &invm->bar); if (IS_ERR(nvm->base)) { dev_err(device, "mmio not mapped\n"); @@ -740,9 +792,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, dev_set_drvdata(&aux_dev->dev, nvm); + pm_runtime_put(device); return 0; err: + pm_runtime_put(device); +err_norpm: kref_put(&nvm->refcnt, intel_dg_nvm_release); return ret; } From patchwork Mon Apr 14 09:37:59 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BCF79C369B2 for ; Mon, 14 Apr 2025 09:50:04 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 15BD410E525; Mon, 14 Apr 2025 09:50:04 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="bRq20Z77"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id CB9E910E523; Mon, 14 Apr 2025 09:50:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624203; x=1776160203; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7LiBfD/MLTP3QcQKFXumLisFHNSaGJhPshphrY2rurU=; b=bRq20Z77DRE9047FD1Ki45kyAxGyyn92tklchLobmwMp0QF1pYRPwHJ0 pKXJOm80JQXUOfws25WBOwK/o7RDUE2eUAs8K+raqLYrL1FDeXtz90jbh PNnapZbcdTc2y7olAM2EjNUNfm9LcyaeQlRo0gyywhvhr1zzeeSfRQq1a qJirevYRmIKOg59mQeIzXNNaJfoeTRcB4/zKehNfB4TknZfwjte8agqoM nGvopFQhX5EvGhvDIWHqOlvHTlm2l9fnG7Kh2Dlxo3qUgELmsssEPbpAv t1UE2DjmddnRIY9mVePJ0YTEeK1q7mxyR3gwaZfcmSpVkv1YFgX3PPkGw A==; X-CSE-ConnectionGUID: /TkdcG7nQ7Sh/OyLPfeElw== X-CSE-MsgGUID: EmM48LrlSBO5XTE5U4rWPg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222834" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222834" Received: from fmviesa002.fm.intel.com 
([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:02 -0700 X-CSE-ConnectionGUID: qe+xWxpnR2OeZ64gMSaXjA== X-CSE-MsgGUID: 9SeHVpxtSaW10tCKSvVhsw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152955032" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:49:56 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin , Tomas Winkler Subject: [PATCH v8 08/12] drm/i915/nvm: add nvm device for discrete graphics Date: Mon, 14 Apr 2025 12:37:59 +0300 Message-ID: <20250414093803.2133463-9-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable access to internal non-volatile memory on DGFX devices via a child device. The nvm child device is exposed via auxiliary bus. 
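
[Editor's note — a hedged sketch, not part of this patch: the consumer side is expected to bind to this "nvm" child through an auxiliary driver whose id table pairs the parent module name with the device name. The "i915.nvm" / "xe.nvm" strings below are assumptions derived from the module names, and the probe/remove prototypes mirror the ones the MTD driver in this series already defines; only the binding boilerplate is shown.]

/*
 * Hedged sketch of the auxiliary-bus matching side.  The id-table names
 * assume the producers are the i915 and xe modules; intel_dg_mtd_probe()
 * and intel_dg_mtd_remove() are defined elsewhere in the MTD driver.
 */
#include <linux/auxiliary_bus.h>
#include <linux/module.h>

static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
			      const struct auxiliary_device_id *aux_dev_id);
static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev);

static const struct auxiliary_device_id intel_dg_mtd_id_table[] = {
	{ .name = "i915.nvm" },
	{ .name = "xe.nvm" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(auxiliary, intel_dg_mtd_id_table);

static struct auxiliary_driver intel_dg_mtd_driver = {
	.probe = intel_dg_mtd_probe,
	.remove = intel_dg_mtd_remove,
	.id_table = intel_dg_mtd_id_table,
};
module_auxiliary_driver(intel_dg_mtd_driver);
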
CC: Lucas De Marchi Reviewed-by: Rodrigo Vivi Co-developed-by: Tomas Winkler Signed-off-by: Tomas Winkler Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/i915/Makefile | 4 ++ drivers/gpu/drm/i915/i915_driver.c | 6 ++ drivers/gpu/drm/i915/i915_drv.h | 3 + drivers/gpu/drm/i915/i915_reg.h | 1 + drivers/gpu/drm/i915/intel_nvm.c | 92 ++++++++++++++++++++++++++++++ drivers/gpu/drm/i915/intel_nvm.h | 15 +++++ 6 files changed, 121 insertions(+) create mode 100644 drivers/gpu/drm/i915/intel_nvm.c create mode 100644 drivers/gpu/drm/i915/intel_nvm.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 13d4a16f7d33..d9bb89cce0c9 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -212,6 +212,10 @@ i915-y += \ i915-y += \ gt/intel_gsc.o +# graphics nvm device (DGFX) support +i915-y += \ + intel_nvm.o + # graphics hardware monitoring (HWMON) support i915-$(CONFIG_HWMON) += \ i915_hwmon.o diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index 97ff9855b5de..263ca12b96ae 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -83,6 +83,8 @@ #include "soc/intel_dram.h" #include "soc/intel_gmch.h" +#include "intel_nvm.h" + #include "i915_debugfs.h" #include "i915_driver.h" #include "i915_drm_client.h" @@ -648,6 +650,8 @@ static int i915_driver_register(struct drm_i915_private *dev_priv) /* Depends on sysfs having been initialized */ i915_perf_register(dev_priv); + intel_nvm_init(dev_priv); + for_each_gt(gt, dev_priv, i) intel_gt_driver_register(gt); @@ -690,6 +694,8 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv) i915_hwmon_unregister(dev_priv); + intel_nvm_fini(dev_priv); + i915_perf_unregister(dev_priv); i915_pmu_unregister(dev_priv); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index c0eec8fe5cad..bb879dc9764b 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -65,6 +65,7 @@ struct drm_i915_clock_gating_funcs; struct vlv_s0ix_state; struct intel_pxp; +struct intel_dg_nvm_dev; #define GEM_QUIRK_PIN_SWIZZLED_PAGES BIT(0) @@ -311,6 +312,8 @@ struct drm_i915_private { struct i915_perf perf; + struct intel_dg_nvm_dev *nvm; + struct i915_hwmon *hwmon; struct intel_gt *gt[I915_MAX_GT]; diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 49beab8e324d..9f783cc5a8e6 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -321,6 +321,7 @@ #define DG2_GSC_HECI2_BASE 0x00374000 #define MTL_GSC_HECI1_BASE 0x00116000 #define MTL_GSC_HECI2_BASE 0x00117000 +#define GEN12_GUNIT_NVM_BASE 0x00102040 #define HECI_H_CSR(base) _MMIO((base) + 0x4) #define HECI_H_CSR_IE REG_BIT(0) diff --git a/drivers/gpu/drm/i915/intel_nvm.c b/drivers/gpu/drm/i915/intel_nvm.c new file mode 100644 index 000000000000..75d3ebe669ff --- /dev/null +++ b/drivers/gpu/drm/i915/intel_nvm.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright(c) 2019-2024, Intel Corporation. All rights reserved. 
+ */ + +#include +#include +#include "i915_reg.h" +#include "i915_drv.h" +#include "intel_nvm.h" + +#define GEN12_GUNIT_NVM_SIZE 0x80 + +static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { + [0] = { .name = "DESCRIPTOR", }, + [2] = { .name = "GSC", }, + [11] = { .name = "OptionROM", }, + [12] = { .name = "DAM", }, +}; + +static void i915_nvm_release_dev(struct device *dev) +{ +} + +void intel_nvm_init(struct drm_i915_private *i915) +{ + struct pci_dev *pdev = to_pci_dev(i915->drm.dev); + struct intel_dg_nvm_dev *nvm; + struct auxiliary_device *aux_dev; + int ret; + + /* Only the DGFX devices have internal NVM */ + if (!IS_DGFX(i915)) + return; + + /* Nvm pointer should be NULL here */ + if (WARN_ON(i915->nvm)) + return; + + i915->nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); + if (!i915->nvm) + return; + + nvm = i915->nvm; + + nvm->writeable_override = true; + nvm->bar.parent = &pdev->resource[0]; + nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; + nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; + nvm->bar.flags = IORESOURCE_MEM; + nvm->bar.desc = IORES_DESC_NONE; + nvm->regions = regions; + + aux_dev = &nvm->aux_dev; + + aux_dev->name = "nvm"; + aux_dev->id = (pci_domain_nr(pdev->bus) << 16) | + PCI_DEVID(pdev->bus->number, pdev->devfn); + aux_dev->dev.parent = &pdev->dev; + aux_dev->dev.release = i915_nvm_release_dev; + + ret = auxiliary_device_init(aux_dev); + if (ret) { + drm_err(&i915->drm, "i915-nvm aux init failed %d\n", ret); + return; + } + + ret = auxiliary_device_add(aux_dev); + if (ret) { + drm_err(&i915->drm, "i915-nvm aux add failed %d\n", ret); + auxiliary_device_uninit(aux_dev); + return; + } +} + +void intel_nvm_fini(struct drm_i915_private *i915) +{ + struct intel_dg_nvm_dev *nvm = i915->nvm; + + /* Only the DGFX devices have internal NVM */ + if (!IS_DGFX(i915)) + return; + + /* Nvm pointer should not be NULL here */ + if (WARN_ON(!nvm)) + return; + + auxiliary_device_delete(&nvm->aux_dev); + auxiliary_device_uninit(&nvm->aux_dev); + kfree(nvm); + i915->nvm = NULL; +} diff --git a/drivers/gpu/drm/i915/intel_nvm.h b/drivers/gpu/drm/i915/intel_nvm.h new file mode 100644 index 000000000000..7bc3d1114a3f --- /dev/null +++ b/drivers/gpu/drm/i915/intel_nvm.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2024 Intel Corporation. All rights reserved. 
+ */ + +#ifndef __INTEL_NVM_H__ +#define __INTEL_NVM_H__ + +struct drm_i915_private; + +void intel_nvm_init(struct drm_i915_private *i915); + +void intel_nvm_fini(struct drm_i915_private *i915); + +#endif /* __INTEL_NVM_H__ */ From patchwork Mon Apr 14 09:38:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050069 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 48F69C369B5 for ; Mon, 14 Apr 2025 09:50:09 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D740710E52D; Mon, 14 Apr 2025 09:50:08 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="HNIg16Ro"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8A42410E52B; Mon, 14 Apr 2025 09:50:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624208; x=1776160208; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jp8JwSIuJk4GFCR3LmfzYuHpnYGRU8UeJvhnzagKb4c=; b=HNIg16Rof0KEy2W1TZpSTOnbZUafi4oGnYNyxoUX7xPzMumbO5YlWZrS RmKRCQG+WA4qY8sSjOJc5i/nFKvD7k8CVj5MYXSX/EJIHQy+4hn9bwoB/ YEd9dwu28sTYrCLFBNB7ULLQVZzRBZW5eC2uzgaWSjBY9w0HiaPFSlhvt gSYkl3+25DMJEDfkr0M2c0r5aZszm28podWAHr4W8HcQEhP4dX3CmJDp4 7mn4CC02l7NKtUhdsQiMrUrSXPfV6NWvvc5nqhmATPMSh+0YjdB2iKxhp Vh0HlONDiIlYmsclpR5Kxi5mzhEPusuj300KGmZneSIL1/beZs7+3RHmT A==; X-CSE-ConnectionGUID: 0cN7x2MLTKO64z+VA83+EQ== X-CSE-MsgGUID: XuZM3YVsQOupPrHfY4N2nQ== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222856" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222856" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:07 -0700 X-CSE-ConnectionGUID: RBwtBu+/TcK6uLlU31VsmA== X-CSE-MsgGUID: 10QK9666SoiL6KNjlrB6Lg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152955063" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:02 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 09/12] drm/i915/nvm: add support for access mode Date: Mon, 14 Apr 2025 12:38:00 +0300 Message-ID: <20250414093803.2133463-10-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 
X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Check NVM access mode from GSC FW status registers and overwrite access status read from SPI descriptor, if needed. Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/i915/intel_nvm.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/intel_nvm.c b/drivers/gpu/drm/i915/intel_nvm.c index 75d3ebe669ff..dd3999c934a7 100644 --- a/drivers/gpu/drm/i915/intel_nvm.c +++ b/drivers/gpu/drm/i915/intel_nvm.c @@ -10,6 +10,7 @@ #include "intel_nvm.h" #define GEN12_GUNIT_NVM_SIZE 0x80 +#define HECI_FW_STATUS_2_NVM_ACCESS_MODE BIT(3) static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { [0] = { .name = "DESCRIPTOR", }, @@ -22,6 +23,28 @@ static void i915_nvm_release_dev(struct device *dev) { } +static bool i915_nvm_writable_override(struct drm_i915_private *i915) +{ + resource_size_t base; + bool writable_override; + + if (IS_DG1(i915)) { + base = DG1_GSC_HECI2_BASE; + } else if (IS_DG2(i915)) { + base = DG2_GSC_HECI2_BASE; + } else { + drm_err(&i915->drm, "Unknown platform\n"); + return true; + } + + writable_override = + !(intel_uncore_read(&i915->uncore, HECI_FWSTS(base, 2)) & + HECI_FW_STATUS_2_NVM_ACCESS_MODE); + if (writable_override) + drm_info(&i915->drm, "NVM access overridden by jumper\n"); + return writable_override; +} + void intel_nvm_init(struct drm_i915_private *i915) { struct pci_dev *pdev = to_pci_dev(i915->drm.dev); @@ -43,7 +66,7 @@ void intel_nvm_init(struct drm_i915_private *i915) nvm = i915->nvm; - nvm->writeable_override = true; + nvm->writable_override = i915_nvm_writable_override(i915); nvm->bar.parent = &pdev->resource[0]; nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; From patchwork Mon Apr 14 09:38:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050070 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A6DC6C369B6 for ; Mon, 14 Apr 2025 09:50:14 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 35FFF10E52C; Mon, 14 Apr 2025 09:50:14 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="bltL6ltm"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id 68C0F10E52E; Mon, 14 Apr 2025 09:50:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624213; x=1776160213; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CyqVlGa1C2j9MvRIN+gOG2Fu2XpBii44f/RM7gDS9Bs=; b=bltL6ltmNDIIEJqUHYTHBKtVARIczR9bQBUFwugjQ4Oe2nkxYthtn0Gs 
qlMDHOy9iomOPLxzdHvxkqexJ34PJOUHhc0I/K/hOPGw8RhlfIMPlKsQp g5SzuYzx1XrhC34j/A/ogUal/K8y/OmM9H4nGoEd1/s5Yahonl3iCqkNq TLKtMAv2mjeZzYWFagWojdiROA3Qi1ncmMMxa9mDPEBD7rFpZf7kjG8v8 5zX4nNbpMd0lYZiFzzDYkp13/tGn7wniozzlj6w/8Dfosx8hO9yt2R1CZ yu+2BZEIE1kmjRCkKTzoBO+RZcP7BLZe4JErQW3qQ2TmTi/+hw+Fhw7zu w==; X-CSE-ConnectionGUID: jfd3p1OPR4yEp2Gip3Gjmw== X-CSE-MsgGUID: d+nwT5wzTAinJDf2/QrUZA== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222877" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222877" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:13 -0700 X-CSE-ConnectionGUID: BrSYiWdvRemEj3T5KIyk6w== X-CSE-MsgGUID: vV6FiP/YSEiqVflYySc0Jw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152955086" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:07 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 10/12] drm/xe/nvm: add on-die non-volatile memory device Date: Mon, 14 Apr 2025 12:38:01 +0300 Message-ID: <20250414093803.2133463-11-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Enable access to internal non-volatile memory on DGFX with GSC/CSC devices via a child device. The nvm child device is exposed via auxiliary bus. 
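
[Editor's note — a hedged illustration of the auxiliary-device id scheme this patch uses; the concrete numbers are hypothetical examples, not taken from real hardware. Packing the PCI domain with the bus/devfn keeps the "nvm" child unique per card, and the auxiliary bus then names the device "xe.nvm.<id>".]

/*
 * Hedged illustration of the id packing used in xe_nvm_init() below.
 * Example: domain 0x0000, bus 0x03, devfn 0x00 gives
 *   id = (0x0000 << 16) | PCI_DEVID(0x03, 0x00) = 0x0300 (768),
 * so the device appears on the auxiliary bus as "xe.nvm.768".
 */
#include <linux/pci.h>

static u32 nvm_aux_id(struct pci_dev *pdev)
{
	return (pci_domain_nr(pdev->bus) << 16) |
	       PCI_DEVID(pdev->bus->number, pdev->devfn);
}
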
Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/xe/Makefile | 1 + drivers/gpu/drm/xe/xe_device.c | 5 ++ drivers/gpu/drm/xe/xe_device_types.h | 6 ++ drivers/gpu/drm/xe/xe_nvm.c | 101 +++++++++++++++++++++++++++ drivers/gpu/drm/xe/xe_nvm.h | 15 ++++ drivers/gpu/drm/xe/xe_pci.c | 6 ++ 6 files changed, 134 insertions(+) create mode 100644 drivers/gpu/drm/xe/xe_nvm.c create mode 100644 drivers/gpu/drm/xe/xe_nvm.h diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile index 3ecac0a38b82..1f026bf280ea 100644 --- a/drivers/gpu/drm/xe/Makefile +++ b/drivers/gpu/drm/xe/Makefile @@ -80,6 +80,7 @@ xe-y += xe_bb.o \ xe_mmio.o \ xe_mocs.o \ xe_module.o \ + xe_nvm.o \ xe_oa.o \ xe_observation.o \ xe_pat.o \ diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c index 75e753e0a682..c7755903d5e4 100644 --- a/drivers/gpu/drm/xe/xe_device.c +++ b/drivers/gpu/drm/xe/xe_device.c @@ -45,6 +45,7 @@ #include "xe_memirq.h" #include "xe_mmio.h" #include "xe_module.h" +#include "xe_nvm.h" #include "xe_oa.h" #include "xe_observation.h" #include "xe_pat.h" @@ -885,6 +886,8 @@ int xe_device_probe(struct xe_device *xe) return err; } + xe_nvm_init(xe); + err = xe_heci_gsc_init(xe); if (err) return err; @@ -938,6 +941,8 @@ void xe_device_remove(struct xe_device *xe) { xe_display_unregister(xe); + xe_nvm_fini(xe); + drm_dev_unplug(&xe->drm); xe_bo_pci_dev_remove_all(xe); diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h index b9a892c44c67..7cf4e8ee5732 100644 --- a/drivers/gpu/drm/xe/xe_device_types.h +++ b/drivers/gpu/drm/xe/xe_device_types.h @@ -35,6 +35,7 @@ #include "intel_display_device.h" #endif +struct intel_dg_nvm_dev; struct xe_ggtt; struct xe_pat_ops; struct xe_pxp; @@ -319,6 +320,8 @@ struct xe_device { u8 has_fan_control:1; /** @info.has_flat_ccs: Whether flat CCS metadata is used */ u8 has_flat_ccs:1; + /** @info.has_gsc_nvm: Device has gsc non-volatile memory */ + u8 has_gsc_nvm:1; /** @info.has_heci_cscfi: device has heci cscfi */ u8 has_heci_cscfi:1; /** @info.has_heci_gscfi: device has heci gscfi */ @@ -539,6 +542,9 @@ struct xe_device { /** @heci_gsc: graphics security controller */ struct xe_heci_gsc heci_gsc; + /** @nvm: discrete graphics non-volatile memory */ + struct intel_dg_nvm_dev *nvm; + /** @oa: oa observation subsystem */ struct xe_oa oa; diff --git a/drivers/gpu/drm/xe/xe_nvm.c b/drivers/gpu/drm/xe/xe_nvm.c new file mode 100644 index 000000000000..26de7d4472c8 --- /dev/null +++ b/drivers/gpu/drm/xe/xe_nvm.c @@ -0,0 +1,101 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright(c) 2019-2025, Intel Corporation. All rights reserved. 
+ */ + +#include +#include + +#include "xe_device_types.h" +#include "xe_nvm.h" +#include "xe_sriov.h" + +#define GEN12_GUNIT_NVM_BASE 0x00102040 +#define GEN12_GUNIT_NVM_SIZE 0x80 +#define HECI_FW_STATUS_2_NVM_ACCESS_MODE BIT(3) + +static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { + [0] = { .name = "DESCRIPTOR", }, + [2] = { .name = "GSC", }, + [11] = { .name = "OptionROM", }, + [12] = { .name = "DAM", }, +}; + +static void xe_nvm_release_dev(struct device *dev) +{ +} + +void xe_nvm_init(struct xe_device *xe) +{ + struct pci_dev *pdev = to_pci_dev(xe->drm.dev); + struct intel_dg_nvm_dev *nvm; + struct auxiliary_device *aux_dev; + int ret; + + if (!xe->info.has_gsc_nvm) + return; + + /* No access to internal NVM from VFs */ + if (IS_SRIOV_VF(xe)) + return; + + /* Nvm pointer should be NULL here */ + if (WARN_ON(xe->nvm)) + return; + + xe->nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); + if (!xe->nvm) + return; + + nvm = xe->nvm; + + nvm->writeable_override = false; + nvm->bar.parent = &pdev->resource[0]; + nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; + nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; + nvm->bar.flags = IORESOURCE_MEM; + nvm->bar.desc = IORES_DESC_NONE; + nvm->regions = regions; + + aux_dev = &nvm->aux_dev; + + aux_dev->name = "nvm"; + aux_dev->id = (pci_domain_nr(pdev->bus) << 16) | + PCI_DEVID(pdev->bus->number, pdev->devfn); + aux_dev->dev.parent = &pdev->dev; + aux_dev->dev.release = xe_nvm_release_dev; + + ret = auxiliary_device_init(aux_dev); + if (ret) { + drm_err(&xe->drm, "xe-nvm aux init failed %d\n", ret); + return; + } + + ret = auxiliary_device_add(aux_dev); + if (ret) { + drm_err(&xe->drm, "xe-nvm aux add failed %d\n", ret); + auxiliary_device_uninit(aux_dev); + return; + } +} + +void xe_nvm_fini(struct xe_device *xe) +{ + struct intel_dg_nvm_dev *nvm = xe->nvm; + + if (!xe->info.has_gsc_nvm) + return; + + /* No access to internal NVM from VFs */ + if (IS_SRIOV_VF(xe)) + return; + + /* Nvm pointer should not be NULL here */ + if (WARN_ON(!nvm)) + return; + + auxiliary_device_delete(&nvm->aux_dev); + auxiliary_device_uninit(&nvm->aux_dev); + kfree(nvm); + xe->nvm = NULL; +} diff --git a/drivers/gpu/drm/xe/xe_nvm.h b/drivers/gpu/drm/xe/xe_nvm.h new file mode 100644 index 000000000000..5487764c180f --- /dev/null +++ b/drivers/gpu/drm/xe/xe_nvm.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright(c) 2019-2025 Intel Corporation. All rights reserved. 
+ */ + +#ifndef __XE_NVM_H__ +#define __XE_NVM_H__ + +struct xe_device; + +void xe_nvm_init(struct xe_device *xe); + +void xe_nvm_fini(struct xe_device *xe); + +#endif diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c index 4fe7e0d941a9..252539fdf25a 100644 --- a/drivers/gpu/drm/xe/xe_pci.c +++ b/drivers/gpu/drm/xe/xe_pci.c @@ -63,6 +63,7 @@ struct xe_device_desc { u8 has_display:1; u8 has_fan_control:1; + u8 has_gsc_nvm:1; u8 has_heci_gscfi:1; u8 has_heci_cscfi:1; u8 has_llc:1; @@ -270,6 +271,7 @@ static const struct xe_device_desc dg1_desc = { PLATFORM(DG1), .dma_mask_size = 39, .has_display = true, + .has_gsc_nvm = 1, .has_heci_gscfi = 1, .require_force_probe = true, }; @@ -281,6 +283,7 @@ static const u16 dg2_g12_ids[] = { INTEL_DG2_G12_IDS(NOP), 0 }; #define DG2_FEATURES \ DGFX_FEATURES, \ PLATFORM(DG2), \ + .has_gsc_nvm = 1, \ .has_heci_gscfi = 1, \ .subplatforms = (const struct xe_subplatform_desc[]) { \ { XE_SUBPLATFORM_DG2_G10, "G10", dg2_g10_ids }, \ @@ -317,6 +320,7 @@ static const __maybe_unused struct xe_device_desc pvc_desc = { PLATFORM(PVC), .dma_mask_size = 52, .has_display = false, + .has_gsc_nvm = 1, .has_heci_gscfi = 1, .max_remote_tiles = 1, .require_force_probe = true, @@ -345,6 +349,7 @@ static const struct xe_device_desc bmg_desc = { .dma_mask_size = 46, .has_display = true, .has_fan_control = true, + .has_gsc_nvm = 1, .has_heci_cscfi = 1, .needs_scratch = true, }; @@ -588,6 +593,7 @@ static int xe_info_init_early(struct xe_device *xe, xe->info.dma_mask_size = desc->dma_mask_size; xe->info.is_dgfx = desc->is_dgfx; xe->info.has_fan_control = desc->has_fan_control; + xe->info.has_gsc_nvm = desc->has_gsc_nvm; xe->info.has_heci_gscfi = desc->has_heci_gscfi; xe->info.has_heci_cscfi = desc->has_heci_cscfi; xe->info.has_llc = desc->has_llc; From patchwork Mon Apr 14 09:38:02 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0C51FC369B6 for ; Mon, 14 Apr 2025 09:50:21 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9D36B10E52E; Mon, 14 Apr 2025 09:50:20 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="ez01lb80"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id E055710E52E; Mon, 14 Apr 2025 09:50:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624219; x=1776160219; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=rjKY1l/SEQEYfQRD5GQW4b/ZP8WwEI3BeQdcMqQ/5zY=; b=ez01lb80P06NoP9tqx6xP0ynkudqfNlCgjEHNsLgY2/nohVGaWra/5k8 8wnUgB3UfD+bqyEYYLahfs9agVYkn5syhHXEw+E3n8ySbf2YBIhQpQYLe JTy06pIERUz0RS6BKJIsAuRVEmDZC3iV/wetuhWH0YQhfONWl2+r0kOky nHlNhrthIRC7zRi1DplpkZfouurj+7RnZtJsOmMwBpLgLiHz36QzR4452 dyozTvqvLTqNNYo8vgm7cvp8m+1DtbPRGNafqwj3UaqJmJXpjTl/Hovck Kd31fZJkb6wHQQclITsLiHx5s9sKTudQ1gg5vnhIKsxoBxf2J/iVkrwwP w==; X-CSE-ConnectionGUID: 
yufKMrxiSb6H8QPCBDQ81w== X-CSE-MsgGUID: BglBeCMsRNuVLK8nz4f8mg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222894" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222894" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:18 -0700 X-CSE-ConnectionGUID: LtaztX1MRCaNLZ4z9BnGHg== X-CSE-MsgGUID: kySOb/r+Roe2aahZEoXcTw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152955095" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:13 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Alexander Usyskin Subject: [PATCH v8 11/12] drm/xe/nvm: add support for access mode Date: Mon, 14 Apr 2025 12:38:02 +0300 Message-ID: <20250414093803.2133463-12-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Check NVM access mode from GSC FW status registers and overwrite access status read from SPI descriptor, if needed. 
Reviewed-by: Rodrigo Vivi Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/xe/regs/xe_gsc_regs.h | 4 +++ drivers/gpu/drm/xe/xe_heci_gsc.c | 5 +--- drivers/gpu/drm/xe/xe_nvm.c | 37 ++++++++++++++++++++++++++- 3 files changed, 41 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/xe/regs/xe_gsc_regs.h b/drivers/gpu/drm/xe/regs/xe_gsc_regs.h index 7702364b65f1..9b66cc972a63 100644 --- a/drivers/gpu/drm/xe/regs/xe_gsc_regs.h +++ b/drivers/gpu/drm/xe/regs/xe_gsc_regs.h @@ -16,6 +16,10 @@ #define MTL_GSC_HECI1_BASE 0x00116000 #define MTL_GSC_HECI2_BASE 0x00117000 +#define DG1_GSC_HECI2_BASE 0x00259000 +#define PVC_GSC_HECI2_BASE 0x00285000 +#define DG2_GSC_HECI2_BASE 0x00374000 + #define HECI_H_CSR(base) XE_REG((base) + 0x4) #define HECI_H_CSR_IE REG_BIT(0) #define HECI_H_CSR_IS REG_BIT(1) diff --git a/drivers/gpu/drm/xe/xe_heci_gsc.c b/drivers/gpu/drm/xe/xe_heci_gsc.c index 27d11e06a82b..6d7b62724126 100644 --- a/drivers/gpu/drm/xe/xe_heci_gsc.c +++ b/drivers/gpu/drm/xe/xe_heci_gsc.c @@ -11,15 +11,12 @@ #include "xe_device_types.h" #include "xe_drv.h" #include "xe_heci_gsc.h" +#include "regs/xe_gsc_regs.h" #include "xe_platform_types.h" #include "xe_survivability_mode.h" #define GSC_BAR_LENGTH 0x00000FFC -#define DG1_GSC_HECI2_BASE 0x259000 -#define PVC_GSC_HECI2_BASE 0x285000 -#define DG2_GSC_HECI2_BASE 0x374000 - static void heci_gsc_irq_mask(struct irq_data *d) { /* generic irq handling */ diff --git a/drivers/gpu/drm/xe/xe_nvm.c b/drivers/gpu/drm/xe/xe_nvm.c index 26de7d4472c8..8aec20bc629a 100644 --- a/drivers/gpu/drm/xe/xe_nvm.c +++ b/drivers/gpu/drm/xe/xe_nvm.c @@ -6,8 +6,11 @@ #include #include +#include "xe_device.h" #include "xe_device_types.h" +#include "xe_mmio.h" #include "xe_nvm.h" +#include "regs/xe_gsc_regs.h" #include "xe_sriov.h" #define GEN12_GUNIT_NVM_BASE 0x00102040 @@ -25,6 +28,38 @@ static void xe_nvm_release_dev(struct device *dev) { } +static bool xe_nvm_writable_override(struct xe_device *xe) +{ + struct xe_gt *gt = xe_root_mmio_gt(xe); + resource_size_t base; + bool writable_override; + + switch (xe->info.platform) { + case XE_BATTLEMAGE: + base = DG2_GSC_HECI2_BASE; + break; + case XE_PVC: + base = PVC_GSC_HECI2_BASE; + break; + case XE_DG2: + base = DG2_GSC_HECI2_BASE; + break; + case XE_DG1: + base = DG1_GSC_HECI2_BASE; + break; + default: + drm_err(&xe->drm, "Unknown platform\n"); + return true; + } + + writable_override = + !(xe_mmio_read32(>->mmio, HECI_FWSTS2(base)) & + HECI_FW_STATUS_2_NVM_ACCESS_MODE); + if (writable_override) + drm_info(&xe->drm, "NVM access overridden by jumper\n"); + return writable_override; +} + void xe_nvm_init(struct xe_device *xe) { struct pci_dev *pdev = to_pci_dev(xe->drm.dev); @@ -49,7 +84,7 @@ void xe_nvm_init(struct xe_device *xe) nvm = xe->nvm; - nvm->writeable_override = false; + nvm->writable_override = xe_nvm_writable_override(xe); nvm->bar.parent = &pdev->resource[0]; nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; From patchwork Mon Apr 14 09:38:03 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Usyskin X-Patchwork-Id: 14050072 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) 
with ESMTPS id 727CBC369B6 for ; Mon, 14 Apr 2025 09:50:26 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 04DE010E533; Mon, 14 Apr 2025 09:50:26 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="dAPVMJDT"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id D002D10E536; Mon, 14 Apr 2025 09:50:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1744624225; x=1776160225; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=v6xyr5sQzPhwnMiPig6iTdAKcLxvTau35Jqv1TFXB8U=; b=dAPVMJDTKbu96oyoFO9+SS6AeT+ky6spzwo6lvYKtOwA/joJfmAfEXTG 5P57p16+ydigCdudJaCv2CIwIBjiMA91mneAeVOJsz6Cg0FMBtrqxkMfe usJ3wDostx/48H8ZEFiMzHiW/P1yk7Zg9WKB7u4C/add0IZ+BQXfdmo5o HzGMomPPh41zZ5yjchQPrqFstZsq3UwXDRVSvzIhmb2AbNrJjhsehDwLt HbWOD2VaIOFJomtNLFKwFOGRez6zM8pQtVE96II79G8fDGk0ncH2EmPl8 S7buE9lWk+B9pMKygSzifi8c6EHjigFasjhKixupuO+ebwu2Yy7TDBYZu A==; X-CSE-ConnectionGUID: SmWniql2QjCIf/U4GOhHYw== X-CSE-MsgGUID: GZYD5IK7TZ6XmTcNLqd1bg== X-IronPort-AV: E=McAfee;i="6700,10204,11402"; a="45222914" X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="45222914" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:24 -0700 X-CSE-ConnectionGUID: 7b6uc3grQC2YOP1O4UE/cw== X-CSE-MsgGUID: AdRWJqEtTFKQJbcKK4L2CQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.15,212,1739865600"; d="scan'208";a="152955117" Received: from sannilnx-dsk.jer.intel.com ([10.12.231.107]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2025 02:50:18 -0700 From: Alexander Usyskin To: Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , Lucas De Marchi , =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Rodrigo Vivi , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , David Airlie , Simona Vetter , Jani Nikula , Joonas Lahtinen , Tvrtko Ursulin , Karthik Poosa Cc: Reuven Abliyev , Oren Weil , linux-mtd@lists.infradead.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org, Abliyev@freedesktop.org, Alexander Usyskin Subject: [PATCH v8 12/12] drm/xe/nvm: add support for non-posted erase Date: Mon, 14 Apr 2025 12:38:03 +0300 Message-ID: <20250414093803.2133463-13-alexander.usyskin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250414093803.2133463-1-alexander.usyskin@intel.com> References: <20250414093803.2133463-1-alexander.usyskin@intel.com> MIME-Version: 1.0 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: "Abliyev, Reuven" Erase command is slow on discrete graphics storage and may overshot PCI completion timeout. BMG introduces the ability to have non-posted erase. Add driver support for non-posted erase with polling for erase completion. 
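
[Editor's note — a minimal sketch of the erase-done wait described above, expressed with read_poll_timeout() from <linux/iopoll.h> as an alternative to the open-coded msleep() loop the patch below adds. This is an assumption for illustration, not the patch's code: NVM_DEBUG_REG and NVM_NON_POSTED_ERASE_DONE are the definitions this patch introduces, and the 3000 x 10 ms iterations are folded into a 30 s timeout here.]

/*
 * Hedged sketch: wait for the non-posted erase to complete by polling
 * the debug register, then clear the done bit.  Names come from the
 * patch below; the helper itself is hypothetical.
 */
#include <linux/iopoll.h>

static int idg_wait_non_posted_erase_done(void __iomem *base2)
{
	u32 reg;
	int ret;

	/* poll every 10 ms, give up after 30 s (3000 x 10 ms) */
	ret = read_poll_timeout(ioread32, reg,
				reg & NVM_NON_POSTED_ERASE_DONE,
				10 * USEC_PER_MSEC, 30 * USEC_PER_SEC,
				false, base2 + NVM_DEBUG_REG);
	if (ret)
		return -ETIME;

	/* write the value back to clear the Erase Done bit */
	iowrite32(reg, base2 + NVM_DEBUG_REG);
	return 0;
}
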
Signed-off-by: Abliyev, Reuven Signed-off-by: Alexander Usyskin --- drivers/gpu/drm/xe/xe_nvm.c | 25 +++++++++++++++++ drivers/mtd/devices/mtd_intel_dg.c | 43 ++++++++++++++++++++++++++++-- include/linux/intel_dg_nvm_aux.h | 2 ++ 3 files changed, 68 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/xe/xe_nvm.c b/drivers/gpu/drm/xe/xe_nvm.c index 8aec20bc629a..dd91f2e37661 100644 --- a/drivers/gpu/drm/xe/xe_nvm.c +++ b/drivers/gpu/drm/xe/xe_nvm.c @@ -14,7 +14,15 @@ #include "xe_sriov.h" #define GEN12_GUNIT_NVM_BASE 0x00102040 +#define GEN12_DEBUG_NVM_BASE 0x00101018 + +#define GEN12_CNTL_PROTECTED_NVM_REG 0x0010100C + #define GEN12_GUNIT_NVM_SIZE 0x80 +#define GEN12_DEBUG_NVM_SIZE 0x4 + +#define NVM_NON_POSTED_ERASE_CHICKEN_BIT BIT(13) + #define HECI_FW_STATUS_2_NVM_ACCESS_MODE BIT(3) static const struct intel_dg_nvm_region regions[INTEL_DG_NVM_REGIONS] = { @@ -28,6 +36,16 @@ static void xe_nvm_release_dev(struct device *dev) { } +static bool xe_nvm_non_posted_erase(struct xe_device *xe) +{ + struct xe_gt *gt = xe_root_mmio_gt(xe); + + if (xe->info.platform != XE_BATTLEMAGE) + return false; + return !(xe_mmio_read32(>->mmio, XE_REG(GEN12_CNTL_PROTECTED_NVM_REG)) & + NVM_NON_POSTED_ERASE_CHICKEN_BIT); +} + static bool xe_nvm_writable_override(struct xe_device *xe) { struct xe_gt *gt = xe_root_mmio_gt(xe); @@ -85,6 +103,7 @@ void xe_nvm_init(struct xe_device *xe) nvm = xe->nvm; nvm->writable_override = xe_nvm_writable_override(xe); + nvm->non_posted_erase = xe_nvm_non_posted_erase(xe); nvm->bar.parent = &pdev->resource[0]; nvm->bar.start = GEN12_GUNIT_NVM_BASE + pdev->resource[0].start; nvm->bar.end = nvm->bar.start + GEN12_GUNIT_NVM_SIZE - 1; @@ -92,6 +111,12 @@ void xe_nvm_init(struct xe_device *xe) nvm->bar.desc = IORES_DESC_NONE; nvm->regions = regions; + nvm->bar2.parent = &pdev->resource[0]; + nvm->bar2.start = GEN12_DEBUG_NVM_BASE + pdev->resource[0].start; + nvm->bar2.end = nvm->bar2.start + GEN12_DEBUG_NVM_SIZE - 1; + nvm->bar2.flags = IORESOURCE_MEM; + nvm->bar2.desc = IORES_DESC_NONE; + aux_dev = &nvm->aux_dev; aux_dev->name = "nvm"; diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c index 9f4bb15a03b8..c898107a588f 100644 --- a/drivers/mtd/devices/mtd_intel_dg.c +++ b/drivers/mtd/devices/mtd_intel_dg.c @@ -28,6 +28,9 @@ struct intel_dg_nvm { struct mtd_info mtd; struct mutex lock; /* region access lock */ void __iomem *base; + void __iomem *base2; + bool non_posted_erase; + size_t size; unsigned int nregions; struct { @@ -44,6 +47,7 @@ struct intel_dg_nvm { #define NVM_VALSIG_REG 0x00000010 #define NVM_ADDRESS_REG 0x00000040 #define NVM_REGION_ID_REG 0x00000044 +#define NVM_DEBUG_REG 0x00000000 /* * [15:0]-Erase size = 0x0010 4K 0x0080 32K 0x0100 64K * [23:16]-Reserved @@ -75,6 +79,9 @@ struct intel_dg_nvm { #define NVM_FREG_ADDR_SHIFT 12 #define NVM_FREG_MIN_REGION_SIZE 0xFFF +#define NVM_NON_POSTED_ERASE_DONE BIT(23) +#define NVM_NON_POSTED_ERASE_DONE_ITER 3000 + static inline void idg_nvm_set_region_id(struct intel_dg_nvm *nvm, u8 region) { iowrite32((u32)region, nvm->base + NVM_REGION_ID_REG); @@ -370,11 +377,30 @@ idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_a { u64 i; const u32 block = 0x10; + u32 reg; + u32 iter = 0; void __iomem *base = nvm->base; + void __iomem *base2 = nvm->base2; for (i = 0; i < len; i += SZ_4K) { iowrite32(from + i, base + NVM_ADDRESS_REG); iowrite32(region << 24 | block, base + NVM_ERASE_REG); + if (nvm->non_posted_erase) { + /* Wait for Erase Done */ + reg = ioread32(base2 + 
NVM_DEBUG_REG); + while (!(reg & NVM_NON_POSTED_ERASE_DONE) && + ++iter < NVM_NON_POSTED_ERASE_DONE_ITER) { + msleep(10); + reg = ioread32(base2 + NVM_DEBUG_REG); + } + if (reg & NVM_NON_POSTED_ERASE_DONE) { + /* Clear Erase Done */ + iowrite32(reg, base2 + NVM_DEBUG_REG); + } else { + *fail_addr = from + i; + return -ETIME; + } + } /* Since the writes are via sguint * we cannot do back to back erases. */ @@ -383,7 +409,8 @@ idg_erase(struct intel_dg_nvm *nvm, u8 region, loff_t from, u64 len, u64 *fail_a return len; } -static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) +static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device, + bool non_posted_erase) { int ret; unsigned int i, n; @@ -443,7 +470,10 @@ static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device) n++; } + nvm->non_posted_erase = non_posted_erase; + dev_dbg(device, "Registered %d regions\n", n); + dev_dbg(device, "Non posted erase %d\n", nvm->non_posted_erase); /* Need to add 1 to the amount of memory * so it is reported as an even block @@ -778,7 +808,16 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev, goto err; } - ret = intel_dg_nvm_init(nvm, device); + if (invm->non_posted_erase) { + nvm->base2 = devm_ioremap_resource(device, &invm->bar2); + if (IS_ERR(nvm->base2)) { + dev_err(device, "base2 mmio not mapped\n"); + ret = PTR_ERR(nvm->base2); + goto err; + } + } + + ret = intel_dg_nvm_init(nvm, device, invm->non_posted_erase); if (ret < 0) { dev_err(device, "cannot initialize nvm %d\n", ret); goto err; diff --git a/include/linux/intel_dg_nvm_aux.h b/include/linux/intel_dg_nvm_aux.h index 68df634c994c..bee25dfc6982 100644 --- a/include/linux/intel_dg_nvm_aux.h +++ b/include/linux/intel_dg_nvm_aux.h @@ -17,7 +17,9 @@ struct intel_dg_nvm_region { struct intel_dg_nvm_dev { struct auxiliary_device aux_dev; bool writable_override; + bool non_posted_erase; struct resource bar; + struct resource bar2; const struct intel_dg_nvm_region *regions; };