From patchwork Mon Aug  3 05:03:03 2020
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 11697197
Subject: [PATCH v4 07/23] ACPI: HMAT: Attach a device for each soft-reserved range
From: Dan Williams <dan.j.williams@intel.com>
To: akpm@linux-foundation.org
Cc: Jonathan Cameron, Brice Goglin, Ard Biesheuvel, "Rafael J. Wysocki",
 Catalin Marinas, Will Deacon, joao.m.martins@oracle.com,
 peterz@infradead.org, dave.hansen@linux.intel.com, linux-mm@kvack.org,
 linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
 linux-acpi@vger.kernel.org, dri-devel@lists.freedesktop.org
Date: Sun, 02 Aug 2020 22:03:03 -0700
Message-ID: <159643098298.4062302.17587338161136144730.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <159643094279.4062302.17779410714418721328.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <159643094279.4062302.17779410714418721328.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
List-Id: "Linux-nvdimm developer list."
Wysocki" , Catalin Marinas , Will Deacon , joao.m.martins@oracle.com, peterz@infradead.org, dave.hansen@linux.intel.com, linux-mm@kvack.org, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org, dri-devel@lists.freedesktop.org X-Mailman-Version: 3.1.1 Precedence: list List-Id: "Linux-nvdimm developer list." Archived-At: List-Archive: List-Help: List-Post: List-Subscribe: List-Unsubscribe: The hmem enabling in commit 'cf8741ac57ed ("ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device")' only registered ranges to the hmem driver for each soft-reservation that also appeared in the HMAT. While this is meant to encourage platform firmware to "do the right thing" and publish an HMAT, the corollary is that platforms that fail to publish an accurate HMAT will strand memory from Linux usage. Additionally, the "efi_fake_mem" kernel command line option enabling will strand memory by default without an HMAT. Arrange for "soft reserved" memory that goes unclaimed by HMAT entries to be published as raw resource ranges for the hmem driver to consume. Include a module parameter to disable either this fallback behavior, or the hmat enabling from creating hmem devices. The module parameter requires the hmem device enabling to have unique name in the module namespace: "device_hmem". The driver depends on the architecture providing phys_to_target_node() which is only x86 via numa_meminfo() and arm64 via a generic memblock implementation. Cc: Jonathan Cameron Cc: Brice Goglin Cc: Ard Biesheuvel Cc: "Rafael J. Wysocki" Cc: Jeff Moyer Cc: Catalin Marinas Cc: Will Deacon Reviewed-by: Joao Martins Signed-off-by: Dan Williams --- drivers/dax/hmem/Makefile | 3 ++- drivers/dax/hmem/device.c | 35 +++++++++++++++++++++++++++++++++++ 2 files changed, 37 insertions(+), 1 deletion(-) diff --git a/drivers/dax/hmem/Makefile b/drivers/dax/hmem/Makefile index a9d353d0c9ed..57377b4c3d47 100644 --- a/drivers/dax/hmem/Makefile +++ b/drivers/dax/hmem/Makefile @@ -1,5 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DEV_DAX_HMEM) += dax_hmem.o -obj-$(CONFIG_DEV_DAX_HMEM_DEVICES) += device.o +obj-$(CONFIG_DEV_DAX_HMEM_DEVICES) += device_hmem.o +device_hmem-y := device.o dax_hmem-y := hmem.o diff --git a/drivers/dax/hmem/device.c b/drivers/dax/hmem/device.c index b9dd6b27745c..cb6401c9e9a4 100644 --- a/drivers/dax/hmem/device.c +++ b/drivers/dax/hmem/device.c @@ -5,6 +5,9 @@ #include #include +static bool nohmem; +module_param_named(disable, nohmem, bool, 0444); + void hmem_register_device(int target_nid, struct resource *r) { /* define a clean / non-busy resource for the platform device */ @@ -17,6 +20,9 @@ void hmem_register_device(int target_nid, struct resource *r) struct memregion_info info; int rc, id; + if (nohmem) + return; + rc = region_intersects(res.start, resource_size(&res), IORESOURCE_MEM, IORES_DESC_SOFT_RESERVED); if (rc != REGION_INTERSECTS) @@ -63,3 +69,32 @@ void hmem_register_device(int target_nid, struct resource *r) out_pdev: memregion_free(id); } + +static __init int hmem_register_one(struct resource *res, void *data) +{ + /* + * If the resource is not a top-level resource it was already + * assigned to a device by the HMAT parsing. 
+	 */
+	if (res->parent != &iomem_resource) {
+		pr_info("HMEM: skip %pr, already claimed\n", res);
+		return 0;
+	}
+
+	hmem_register_device(phys_to_target_node(res->start), res);
+
+	return 0;
+}
+
+static __init int hmem_init(void)
+{
+	walk_iomem_res_desc(IORES_DESC_SOFT_RESERVED,
+			IORESOURCE_MEM, 0, -1, NULL, hmem_register_one);
+	return 0;
+}
+
+/*
+ * As this is a fallback for address ranges unclaimed by the ACPI HMAT
+ * parsing it must be at an initcall level greater than hmat_init().
+ */
+late_initcall(hmem_init);
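
For anyone who wants to poke at the fallback path on a machine without
an HMAT, a hedged usage sketch (illustrative, not part of the patch):
the "efi_fake_mem" option mentioned above can synthesize a soft-reserved
range by tagging it with the EFI_MEMORY_SP attribute (0x40000, assuming
the documented nn[KMG]@ss[KMG]:attr syntax), and the new "disable"
parameter of the built-in device_hmem object turns off hmem device
creation from both the HMAT path and this fallback. The 2G@16G range is
an arbitrary example value:

    # kernel command line, illustrative values only:
    # mark 2G at offset 16G as "soft reserved" (EFI_MEMORY_SP)
    efi_fake_mem=2G@16G:0x40000

    # opt out of hmem platform-device creation entirely
    device_hmem.disable=1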