From patchwork Thu Sep 13 02:22:06 2018
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 10598649
Subject: [PATCH v5 1/7] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL
From: Dan Williams <dan.j.williams@intel.com>
To: akpm@linux-foundation.org
Cc: Michal Hocko, Jérôme Glisse
, Christoph Hellwig, alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:06 -0700
Message-ID: <153680532635.453305.11297363695024516117.stgit@dwillia2-desk3.amr.corp.intel.com>

devm_memremap_pages() is a facility that can create struct page entries for any arbitrary range and give drivers the ability to subvert core aspects of page management. Specifically the facility is tightly integrated with the kernel's memory hotplug functionality: it injects an altmap argument deep into the architecture-specific vmemmap implementation to allow allocating from specific reserved pages, and it has Linux-specific assumptions about page structure reference counting relative to get_user_pages() and get_user_pages_fast(). It was an oversight and a mistake that this was not marked EXPORT_SYMBOL_GPL from the outset.

Again, devm_memremap_pages() exposes and relies upon core kernel internal assumptions and will continue to evolve along with 'struct page', memory hotplug, and support for new memory types / topologies. Only an in-kernel GPL-only driver is expected to keep up with this ongoing evolution. This interface, and functionality derived from this interface, is not suitable for kernel-external drivers.
Cc: Michal Hocko
Cc: "Jérôme Glisse"
Reviewed-by: Christoph Hellwig
Signed-off-by: Dan Williams
Reviewed-by: Logan Gunthorpe
---
 kernel/memremap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5b8600d39931..f95c7833db6d 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -283,7 +283,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	pgmap_radix_release(res, pgoff);
 	return ERR_PTR(error);
 }
-EXPORT_SYMBOL(devm_memremap_pages);
+EXPORT_SYMBOL_GPL(devm_memremap_pages);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
 {

From patchwork Thu Sep 13 02:22:11 2018
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 10598651
Subject: [PATCH v5 2/7] mm, devm_memremap_pages: Kill mapping "System RAM" support
From: Dan Williams <dan.j.williams@intel.com>
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe, alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:11 -0700
Message-ID: <153680533172.453305.5701902165148172434.stgit@dwillia2-desk3.amr.corp.intel.com>

Given that devm_memremap_pages() requires a percpu_ref that is torn down by devm_memremap_pages_release(), the current support for mapping RAM is broken.

Support for remapping "System RAM" has been broken since the beginning, and there is no existing user of this code path, so just kill the support and make it an explicit error.

This cleanup also simplifies a follow-on patch to fix the error path when setting a devm release action for devm_memremap_pages_release() fails.

Cc: Christoph Hellwig
Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Signed-off-by: Dan Williams
Reviewed-by: Logan Gunthorpe
Signed-off-by: Christoph Hellwig
Reviewed-by: Jérôme Glisse
---
 kernel/memremap.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index f95c7833db6d..92e838127767 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -202,15 +202,12 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	is_ram = region_intersects(align_start, align_size,
 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
 
-	if (is_ram == REGION_MIXED) {
-		WARN_ONCE(1, "%s attempted on mixed region %pr\n",
-				__func__, res);
+	if (is_ram != REGION_DISJOINT) {
+		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
+				is_ram == REGION_MIXED ? "mixed" : "ram", res);
 		return ERR_PTR(-ENXIO);
 	}
 
-	if (is_ram == REGION_INTERSECTS)
-		return __va(res->start);
-
 	if (!pgmap->ref)
 		return ERR_PTR(-EINVAL);

From patchwork Thu Sep 13 02:22:17 2018
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 10598661
Subject: [PATCH v5 3/7] mm, devm_memremap_pages: Fix shutdown handling
From: Dan Williams <dan.j.williams@intel.com>
To: akpm@linux-foundation.org
Cc: stable@vger.kernel.org, Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe, alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:17 -0700
Message-ID: <153680533706.453305.3428304103990941022.stgit@dwillia2-desk3.amr.corp.intel.com>

The last step before devm_memremap_pages() returns success is to allocate a release action, devm_memremap_pages_release(), to tear the entire setup down. However, the result from devm_add_action() is not checked.

Checking the error from devm_add_action() is not enough. The API currently relies on the fact that the percpu_ref it is using is killed by the time devm_memremap_pages_release() is run. Rather than continue this awkward situation, offload the responsibility of killing the percpu_ref to devm_memremap_pages_release() directly. This allows devm_memremap_pages() to do the right thing relative to init failures and shutdown.

Without this change we could fail to register the teardown of devm_memremap_pages(). The likelihood of hitting this failure is tiny, as small memory allocations almost always succeed. However, the impact of the failure is large: any future reconfiguration, or disable/enable, of an nvdimm namespace will fail forever, as subsequent calls to devm_memremap_pages() will fail to set up the pgmap_radix since there will be stale entries for the physical address range.
An argument could be made to require that the ->kill() operation be set in the @pgmap arg rather than passed in separately. However, it helps code readability, tracking the lifetime of a given instance, to be able to grep the kill routine directly at the devm_memremap_pages() call site.

Cc: <stable@vger.kernel.org>
Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...")
Cc: Christoph Hellwig
Cc: "Jérôme Glisse"
Reported-by: Logan Gunthorpe
Reviewed-by: Logan Gunthorpe
Signed-off-by: Dan Williams
Reviewed-by: Jérôme Glisse
---
 drivers/dax/pmem.c                | 15 +++------------
 drivers/nvdimm/pmem.c             | 18 ++++++++----------
 include/linux/memremap.h          |  7 +++++--
 kernel/memremap.c                 | 36 +++++++++++++++++++-----------------
 tools/testing/nvdimm/test/iomap.c | 21 ++++++++++++++++++---
 5 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
index 99e2aace8078..c1e03d769e6d 100644
--- a/drivers/dax/pmem.c
+++ b/drivers/dax/pmem.c
@@ -48,9 +48,8 @@ static void dax_pmem_percpu_exit(void *data)
 	percpu_ref_exit(ref);
 }
 
-static void dax_pmem_percpu_kill(void *data)
+static void dax_pmem_percpu_kill(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct dax_pmem *dax_pmem = to_dax_pmem(ref);
 
 	dev_dbg(dax_pmem->dev, "trace\n");
@@ -112,17 +111,9 @@ static int dax_pmem_probe(struct device *dev)
 	}
 	dax_pmem->pgmap.ref = &dax_pmem->ref;
 
-	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
-	if (IS_ERR(addr)) {
-		devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
-		percpu_ref_exit(&dax_pmem->ref);
+	addr = devm_memremap_pages(dev, &dax_pmem->pgmap, dax_pmem_percpu_kill);
+	if (IS_ERR(addr))
 		return PTR_ERR(addr);
-	}
-
-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
-			&dax_pmem->ref);
-	if (rc)
-		return rc;
 
 	/* adjust the dax_region resource to the start of data */
 	memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 6071e2942053..c9cad5ebea5b 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -309,8 +309,11 @@ static void pmem_release_queue(void *q)
 	blk_cleanup_queue(q);
 }
 
-static void pmem_freeze_queue(void *q)
+static void pmem_freeze_queue(struct percpu_ref *ref)
 {
+	struct request_queue *q;
+
+	q = container_of(ref, typeof(*q), q_usage_counter);
 	blk_freeze_queue_start(q);
 }
 
@@ -405,7 +408,8 @@ static int pmem_attach_disk(struct device *dev,
 	if (is_nd_pfn(dev)) {
 		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
 			return -ENOMEM;
-		addr = devm_memremap_pages(dev, &pmem->pgmap);
+		addr = devm_memremap_pages(dev, &pmem->pgmap,
+				pmem_freeze_queue);
 		pfn_sb = nd_pfn->pfn_sb;
 		pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
 		pmem->pfn_pad = resource_size(res) -
@@ -418,20 +422,14 @@ static int pmem_attach_disk(struct device *dev,
 		pmem->pgmap.altmap_valid = false;
 		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
 			return -ENOMEM;
-		addr = devm_memremap_pages(dev, &pmem->pgmap);
+		addr = devm_memremap_pages(dev, &pmem->pgmap,
+				pmem_freeze_queue);
 		pmem->pfn_flags |= PFN_MAP;
 		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
 	} else
 		addr = devm_memremap(dev, pmem->phys_addr,
 				pmem->size, ARCH_MEMREMAP_PMEM);
 
-	/*
-	 * At release time the queue must be frozen before
-	 * devm_memremap_pages is unwound
-	 */
-	if (devm_add_action_or_reset(dev, pmem_freeze_queue, q))
-		return -ENOMEM;
-
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index f91f9e763557..71f5e7c7dfb9 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -106,6 +106,7 @@ typedef void (*dev_page_free_t)(struct page *page, void *data);
 * @altmap: pre-allocated/reserved memory for vmemmap allocations
 * @res: physical address range covered by @ref
 * @ref: reference count that pins the devm_memremap_pages() mapping
+ * @kill: callback to transition @ref to the dead state
 * @dev: host device of the mapping for debug
 * @data: private data pointer for page_free()
 * @type: memory type: see MEMORY_* in memory_hotplug.h
@@ -117,13 +118,15 @@ struct dev_pagemap {
 	bool altmap_valid;
 	struct resource res;
 	struct percpu_ref *ref;
+	void (*kill)(struct percpu_ref *ref);
 	struct device *dev;
 	void *data;
 	enum memory_type type;
 };
 
 #ifdef CONFIG_ZONE_DEVICE
-void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
+void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *));
 struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap);
 
@@ -131,7 +134,7 @@ unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
 void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
 #else
 static inline void *devm_memremap_pages(struct device *dev,
-		struct dev_pagemap *pgmap)
+		struct dev_pagemap *pgmap, void (*kill)(struct percpu_ref *))
 {
 	/*
 	 * Fail attempts to call devm_memremap_pages() without
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 92e838127767..ab5eb570d28d 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -122,14 +122,10 @@ static void devm_memremap_pages_release(void *data)
 	resource_size_t align_start, align_size;
 	unsigned long pfn;
 
+	pgmap->kill(pgmap->ref);
 	for_each_device_pfn(pfn, pgmap)
 		put_page(pfn_to_page(pfn));
 
-	if (percpu_ref_tryget_live(pgmap->ref)) {
-		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
-		percpu_ref_put(pgmap->ref);
-	}
-
 	/* pages are dead and unused, undo the arch mapping */
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
@@ -150,7 +146,8 @@ static void devm_memremap_pages_release(void *data)
 /**
 * devm_memremap_pages - remap and provide memmap backing for the given resource
 * @dev: hosting device for @res
- * @pgmap: pointer to a struct dev_pgmap
+ * @pgmap: pointer to a struct dev_pagemap
+ * @kill: routine to kill @pgmap->ref
 *
 * Notes:
 * 1/ At a minimum the res, ref and type members of @pgmap must be initialized
@@ -159,17 +156,15 @@ static void devm_memremap_pages_release(void *data)
 * 2/ The altmap field may optionally be initialized, in which case altmap_valid
 *    must be set to true
 *
- * 3/ pgmap.ref must be 'live' on entry and 'dead' before devm_memunmap_pages()
- *    time (or devm release event). The expected order of events is that ref has
- *    been through percpu_ref_kill() before devm_memremap_pages_release(). The
- *    wait for the completion of all references being dropped and
- *    percpu_ref_exit() must occur after devm_memremap_pages_release().
+ * 3/ pgmap->ref must be 'live' on entry and will be killed at
+ *    devm_memremap_pages_release() time, or if this routine fails.
 *
 * 4/ res is expected to be a host memory range that could feasibly be
 *    treated as a "System RAM" range, i.e. not a device mmio range, but
 *    this is not enforced.
 */
-void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *))
 {
 	resource_size_t align_start, align_size, align_end;
 	struct vmem_altmap *altmap = pgmap->altmap_valid ?
@@ -180,6 +175,9 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	int error, nid, is_ram;
 	struct dev_pagemap *conflict_pgmap;
 
+	if (!pgmap->ref || !kill)
+		return ERR_PTR(-EINVAL);
+
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
 		- align_start;
@@ -205,12 +203,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	if (is_ram != REGION_DISJOINT) {
 		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
 				is_ram == REGION_MIXED ? "mixed" : "ram", res);
-		return ERR_PTR(-ENXIO);
+		error = -ENXIO;
+		goto err_init;
 	}
 
-	if (!pgmap->ref)
-		return ERR_PTR(-EINVAL);
-
 	pgmap->dev = dev;
 
 	mutex_lock(&pgmap_lock);
@@ -267,7 +263,11 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		percpu_ref_get(pgmap->ref);
 	}
 
-	devm_add_action(dev, devm_memremap_pages_release, pgmap);
+	pgmap->kill = kill;
+	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
+			pgmap);
+	if (error)
+		return ERR_PTR(error);
 
 	return __va(res->start);
 
@@ -278,6 +278,8 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 err_pfn_remap:
 err_radix:
 	pgmap_radix_release(res, pgoff);
+err_init:
+	kill(pgmap->ref);
 	return ERR_PTR(error);
 }
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index ff9d3a5825e1..ad544e6476a9 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -104,14 +104,29 @@ void *__wrap_devm_memremap(struct device *dev, resource_size_t offset,
 }
 EXPORT_SYMBOL(__wrap_devm_memremap);
 
-void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+static void nfit_test_kill(void *_pgmap)
+{
+	struct dev_pagemap *pgmap = _pgmap;
+
+	pgmap->kill(pgmap->ref);
+}
+
+void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
+		void (*kill)(struct percpu_ref *))
 {
 	resource_size_t offset = pgmap->res.start;
 	struct nfit_test_resource *nfit_res = get_nfit_res(offset);
 
-	if (nfit_res)
+	if (nfit_res) {
+		int rc;
+
+		pgmap->kill = kill;
+		rc = devm_add_action_or_reset(dev, nfit_test_kill, pgmap);
+		if (rc)
+			return ERR_PTR(rc);
 		return nfit_res->buf + offset - nfit_res->res.start;
-	return devm_memremap_pages(dev, pgmap);
+	}
+	return devm_memremap_pages(dev, pgmap, kill);
 }
 EXPORT_SYMBOL(__wrap_devm_memremap_pages);

From patchwork Thu Sep 13 02:22:22 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10598655
Subject: [PATCH v5 4/7] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
    alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:22 -0700
Message-ID: <153680534246.453305.10522027577023444732.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>

In preparation for consolidating all ZONE_DEVICE enabling via
devm_memremap_pages(), teach it how to handle the constraints of
MEMORY_DEVICE_PRIVATE ranges.

Cc: Christoph Hellwig
Cc: "Jérôme Glisse"
Reported-by: Logan Gunthorpe
Reviewed-by: Logan Gunthorpe
Acked-by: Christoph Hellwig
Reviewed-by: Jérôme Glisse
Signed-off-by: Dan Williams
---
 kernel/memremap.c |   51 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 12 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index ab5eb570d28d..3234a771e63a 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -132,9 +132,15 @@ static void devm_memremap_pages_release(void *data)
 		- align_start;
 
 	mem_hotplug_begin();
-	arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
-			&pgmap->altmap : NULL);
-	kasan_remove_zero_shadow(__va(align_start), align_size);
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		pfn = align_start >> PAGE_SHIFT;
+		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
+				align_size >> PAGE_SHIFT, NULL);
+	} else {
+		arch_remove_memory(align_start, align_size,
+				pgmap->altmap_valid ? &pgmap->altmap : NULL);
+		kasan_remove_zero_shadow(__va(align_start), align_size);
+	}
 	mem_hotplug_done();
 
 	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
@@ -234,17 +240,38 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
 		goto err_pfn_remap;
 
 	mem_hotplug_begin();
-	error = kasan_add_zero_shadow(__va(align_start), align_size);
-	if (error) {
-		mem_hotplug_done();
-		goto err_kasan;
-	}
-
-	error = arch_add_memory(nid, align_start, align_size, altmap, false);
-	if (!error)
-		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-				align_start >> PAGE_SHIFT,
+	/*
+	 * For device private memory we call add_pages() as we only need to
+	 * allocate and initialize struct page for the device memory. More-
+	 * over the device memory is un-accessible thus we do not want to
+	 * create a linear mapping for the memory like arch_add_memory()
+	 * would do.
+	 *
+	 * For all other device memory types, which are accessible by
+	 * the CPU, we do want the linear mapping and thus use
+	 * arch_add_memory().
+	 */
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		error = add_pages(nid, align_start >> PAGE_SHIFT,
+				align_size >> PAGE_SHIFT, NULL, false);
+	} else {
+		struct zone *zone;
+
+		error = kasan_add_zero_shadow(__va(align_start), align_size);
+		if (error) {
+			mem_hotplug_done();
+			goto err_kasan;
+		}
+
+		error = arch_add_memory(nid, align_start, align_size, altmap,
+				false);
+		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
+		if (!error)
+			move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT,
 				align_size >> PAGE_SHIFT, altmap);
+	}
+
 	mem_hotplug_done();
 
 	if (error)
 		goto err_add_memory;

From patchwork Thu Sep 13 02:22:27 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10598653
Subject: [PATCH v5 5/7] mm, hmm: Use devm semantics for hmm_devmem_{add, remove}
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
    alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:27 -0700
Message-ID: <153680534781.453305.3660438915028111950.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>

devm semantics arrange for resources to be torn down when device-driver-probe fails or
when device-driver-release completes. Similar to devm_memremap_pages() there
is no need to support an explicit remove operation when the users properly
adhere to devm semantics.

Note that devm_kzalloc() automatically handles allocating node-local memory.

Reviewed-by: Christoph Hellwig
Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Signed-off-by: Dan Williams
Reviewed-by: Jérôme Glisse
---
 include/linux/hmm.h |    4 --
 mm/hmm.c            |  127 ++++++++++-----------------------------------------
 2 files changed, 25 insertions(+), 106 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 4c92e3ba3e16..5ec8635f602c 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -499,8 +499,7 @@ struct hmm_devmem {
  * enough and allocate struct page for it.
  *
  * The device driver can wrap the hmm_devmem struct inside a private device
- * driver struct. The device driver must call hmm_devmem_remove() before the
- * device goes away and before freeing the hmm_devmem struct memory.
+ * driver struct.
  */
 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 				  struct device *device,
@@ -508,7 +507,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct device *device,
 					   struct resource *res);
-void hmm_devmem_remove(struct hmm_devmem *devmem);
 
 /*
  * hmm_devmem_page_set_drvdata - set per-page driver data field
diff --git a/mm/hmm.c b/mm/hmm.c
index c968e49f7a0c..ec1d9eccf176 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -939,7 +939,6 @@ static void hmm_devmem_ref_exit(void *data)
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_exit(ref);
-	devm_remove_action(devmem->device, &hmm_devmem_ref_exit, data);
 }
 
 static void hmm_devmem_ref_kill(void *data)
@@ -950,7 +949,6 @@ static void hmm_devmem_ref_kill(void *data)
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_kill(ref);
 	wait_for_completion(&devmem->completion);
-	devm_remove_action(devmem->device, &hmm_devmem_ref_kill, data);
 }
 
 static int hmm_devmem_fault(struct vm_area_struct *vma,
@@ -988,7 +986,7 @@ static void hmm_devmem_radix_release(struct resource *resource)
 	mutex_unlock(&hmm_devmem_lock);
 }
 
-static void hmm_devmem_release(struct device *dev, void *data)
+static void hmm_devmem_release(void *data)
 {
 	struct hmm_devmem *devmem = data;
 	struct resource *resource = devmem->resource;
@@ -996,11 +994,6 @@ static void hmm_devmem_release(struct device *dev, void *data)
 	struct zone *zone;
 	struct page *page;
 
-	if (percpu_ref_tryget_live(&devmem->ref)) {
-		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
-		percpu_ref_put(&devmem->ref);
-	}
-
 	/* pages are dead and unused, undo the arch mapping */
 	start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT;
 	npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT;
@@ -1124,19 +1117,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
 	return ret;
 }
 
-static int hmm_devmem_match(struct device *dev, void *data, void *match_data)
-{
-	struct hmm_devmem *devmem = data;
-
-	return devmem->resource == match_data;
-}
-
-static void hmm_devmem_pages_remove(struct hmm_devmem *devmem)
-{
-	devres_release(devmem->device, &hmm_devmem_release,
-		       &hmm_devmem_match, devmem->resource);
-}
-
 /*
  * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory
  *
@@ -1164,8 +1144,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 	dev_pagemap_get_ops();
 
-	devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem),
-				   GFP_KERNEL, dev_to_node(device));
+	devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL);
 	if (!devmem)
 		return ERR_PTR(-ENOMEM);
 
@@ -1179,11 +1158,11 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 	ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release,
 			      0, GFP_KERNEL);
 	if (ret)
-		goto error_percpu_ref;
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref);
 	if (ret)
-		goto error_devm_add_action;
+		return ERR_PTR(ret);
 
 	size = ALIGN(size, PA_SECTION_SIZE);
 	addr = min((unsigned long)iomem_resource.end,
@@ -1203,16 +1182,12 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 		devmem->resource = devm_request_mem_region(device, addr, size,
 							   dev_name(device));
-		if (!devmem->resource) {
-			ret = -ENOMEM;
-			goto error_no_resource;
-		}
+		if (!devmem->resource)
+			return ERR_PTR(-ENOMEM);
 		break;
 	}
-	if (!devmem->resource) {
-		ret = -ERANGE;
-		goto error_no_resource;
-	}
+	if (!devmem->resource)
+		return ERR_PTR(-ERANGE);
 
 	devmem->resource->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY;
 	devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
@@ -1221,28 +1196,13 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 
 	ret = hmm_devmem_pages_create(devmem);
 	if (ret)
-		goto error_pages;
-
-	devres_add(device, devmem);
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref);
-	if (ret) {
-		hmm_devmem_remove(devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
+	if (ret)
 		return ERR_PTR(ret);
-	}
 
 	return devmem;
-
-error_pages:
-	devm_release_mem_region(device, devmem->resource->start,
-			resource_size(devmem->resource));
-error_no_resource:
-error_devm_add_action:
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-error_percpu_ref:
-	devres_free(devmem);
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(hmm_devmem_add);
 
@@ -1258,8 +1218,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 
 	dev_pagemap_get_ops();
 
-	devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem),
-				   GFP_KERNEL, dev_to_node(device));
+	devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL);
 	if (!devmem)
 		return ERR_PTR(-ENOMEM);
 
@@ -1273,12 +1232,12 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 	ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release,
 			      0, GFP_KERNEL);
 	if (ret)
-		goto error_percpu_ref;
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit,
+			&devmem->ref);
 	if (ret)
-		goto error_devm_add_action;
-
+		return ERR_PTR(ret);
 	devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
 	devmem->pfn_last = devmem->pfn_first +
@@ -1286,59 +1245,21 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 
 	ret = hmm_devmem_pages_create(devmem);
 	if (ret)
-		goto error_devm_add_action;
+		return ERR_PTR(ret);
 
-	devres_add(device, devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
+	if (ret)
+		return ERR_PTR(ret);
 
-	ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref);
-	if (ret) {
-		hmm_devmem_remove(devmem);
+	ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill,
+			&devmem->ref);
+	if (ret)
 		return ERR_PTR(ret);
-	}
 
 	return devmem;
-
-error_devm_add_action:
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-error_percpu_ref:
-	devres_free(devmem);
-	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(hmm_devmem_add_resource);
 
-/*
- * hmm_devmem_remove() - remove device memory (kill and free ZONE_DEVICE)
- *
- * @devmem: hmm_devmem struct use to track and manage the ZONE_DEVICE memory
- *
- * This will hot-unplug memory that was hotplugged by hmm_devmem_add on behalf
- * of the device driver. It will free struct page and remove the resource that
- * reserved the physical address range for this device memory.
- */
-void hmm_devmem_remove(struct hmm_devmem *devmem)
-{
-	resource_size_t start, size;
-	struct device *device;
-	bool cdm = false;
-
-	if (!devmem)
-		return;
-
-	device = devmem->device;
-	start = devmem->resource->start;
-	size = resource_size(devmem->resource);
-
-	cdm = devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY;
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-	hmm_devmem_pages_remove(devmem);
-
-	if (!cdm)
-		devm_release_mem_region(device, start, size);
-}
-EXPORT_SYMBOL(hmm_devmem_remove);
-
 /*
  * A device driver that wants to handle multiple devices memory through a
  * single fake device can use hmm_device to do so.
 * This is purely a helper

From patchwork Thu Sep 13 02:22:33 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10598657
Subject: [PATCH v5 6/7] mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages()
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
    alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:33 -0700
Message-ID: <153680535314.453305.11205770267271657025.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>

Commit e8d513483300 "memremap: change devm_memremap_pages interface to use
struct dev_pagemap" refactored devm_memremap_pages() to allow a dev_pagemap
instance to be supplied. Passing in a dev_pagemap interface simplifies the
design of pgmap type drivers in that they can rely on container_of() to
lookup any private data associated with the given dev_pagemap instance.

In addition to the cleanups this also gives hmm users multi-order-radix
improvements that arrived with commit ab1b597ee0e4 "mm, devm_memremap_pages:
use multi-order radix for ZONE_DEVICE lookups".

As part of the conversion to the devm_memremap_pages() method of handling
the percpu_ref relative to when pages are put, the percpu_ref completion
needs to move to hmm_devmem_ref_exit(). See commit 71389703839e ("mm,
zone_device: Replace {get, put}_zone_device_page...") for details.
Reviewed-by: Christoph Hellwig
Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Signed-off-by: Dan Williams
Reviewed-by: Jérôme Glisse
Acked-by: Balbir Singh
---
 mm/hmm.c |  194 ++++++++------------------------------------------------------
 1 file changed, 26 insertions(+), 168 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index ec1d9eccf176..c6cab5205b99 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -938,17 +938,16 @@ static void hmm_devmem_ref_exit(void *data)
 	struct hmm_devmem *devmem;
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
+	wait_for_completion(&devmem->completion);
 	percpu_ref_exit(ref);
 }
 
-static void hmm_devmem_ref_kill(void *data)
+static void hmm_devmem_ref_kill(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct hmm_devmem *devmem;
 
 	devmem = container_of(ref, struct hmm_devmem, ref);
 	percpu_ref_kill(ref);
-	wait_for_completion(&devmem->completion);
 }
 
 static int hmm_devmem_fault(struct vm_area_struct *vma,
@@ -971,152 +970,6 @@ static void hmm_devmem_free(struct page *page, void *data)
 	devmem->ops->free(devmem, page);
 }
 
-static DEFINE_MUTEX(hmm_devmem_lock);
-static RADIX_TREE(hmm_devmem_radix, GFP_KERNEL);
-
-static void hmm_devmem_radix_release(struct resource *resource)
-{
-	resource_size_t key;
-
-	mutex_lock(&hmm_devmem_lock);
-	for (key = resource->start;
-	     key <= resource->end;
-	     key += PA_SECTION_SIZE)
-		radix_tree_delete(&hmm_devmem_radix, key >> PA_SECTION_SHIFT);
-	mutex_unlock(&hmm_devmem_lock);
-}
-
-static void hmm_devmem_release(void *data)
-{
-	struct hmm_devmem *devmem = data;
-	struct resource *resource = devmem->resource;
-	unsigned long start_pfn, npages;
-	struct zone *zone;
-	struct page *page;
-
-	/* pages are dead and unused, undo the arch mapping */
-	start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT;
-	npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT;
-
-	page = pfn_to_page(start_pfn);
-	zone = page_zone(page);
-
-	mem_hotplug_begin();
-	if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
-		__remove_pages(zone, start_pfn, npages, NULL);
-	else
-		arch_remove_memory(start_pfn << PAGE_SHIFT,
-				   npages << PAGE_SHIFT, NULL);
-	mem_hotplug_done();
-
-	hmm_devmem_radix_release(resource);
-}
-
-static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
-{
-	resource_size_t key, align_start, align_size, align_end;
-	struct device *device = devmem->device;
-	int ret, nid, is_ram;
-	unsigned long pfn;
-
-	align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1);
-	align_size = ALIGN(devmem->resource->start +
-			   resource_size(devmem->resource),
-			   PA_SECTION_SIZE) - align_start;
-
-	is_ram = region_intersects(align_start, align_size,
-				   IORESOURCE_SYSTEM_RAM,
-				   IORES_DESC_NONE);
-	if (is_ram == REGION_MIXED) {
-		WARN_ONCE(1, "%s attempted on mixed region %pr\n",
-			  __func__, devmem->resource);
-		return -ENXIO;
-	}
-	if (is_ram == REGION_INTERSECTS)
-		return -ENXIO;
-
-	if (devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY)
-		devmem->pagemap.type = MEMORY_DEVICE_PUBLIC;
-	else
-		devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
-
-	devmem->pagemap.res = *devmem->resource;
-	devmem->pagemap.page_fault = hmm_devmem_fault;
-	devmem->pagemap.page_free = hmm_devmem_free;
-	devmem->pagemap.dev = devmem->device;
-	devmem->pagemap.ref = &devmem->ref;
-	devmem->pagemap.data = devmem;
-
-	mutex_lock(&hmm_devmem_lock);
-	align_end = align_start + align_size - 1;
-	for (key = align_start; key <= align_end; key += PA_SECTION_SIZE) {
-		struct hmm_devmem *dup;
-
-		dup = radix_tree_lookup(&hmm_devmem_radix,
-					key >> PA_SECTION_SHIFT);
-		if (dup) {
-			dev_err(device, "%s: collides with mapping for %s\n",
-				__func__, dev_name(dup->device));
-			mutex_unlock(&hmm_devmem_lock);
-			ret = -EBUSY;
-			goto error;
-		}
-		ret = radix_tree_insert(&hmm_devmem_radix,
-					key >> PA_SECTION_SHIFT,
-					devmem);
-		if (ret) {
-			dev_err(device, "%s: failed: %d\n", __func__, ret);
-			mutex_unlock(&hmm_devmem_lock);
-			goto error_radix;
-		}
-	}
-	mutex_unlock(&hmm_devmem_lock);
-
-	nid = dev_to_node(device);
-	if (nid < 0)
-		nid = numa_mem_id();
-
-	mem_hotplug_begin();
-	/*
-	 * For device private memory we call add_pages() as we only need to
-	 * allocate and initialize struct page for the device memory. More-
-	 * over the device memory is un-accessible thus we do not want to
-	 * create a linear mapping for the memory like arch_add_memory()
-	 * would do.
-	 *
-	 * For device public memory, which is accesible by the CPU, we do
-	 * want the linear mapping and thus use arch_add_memory().
-	 */
-	if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC)
-		ret = arch_add_memory(nid, align_start, align_size, NULL,
-				false);
-	else
-		ret = add_pages(nid, align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, NULL, false);
-	if (ret) {
-		mem_hotplug_done();
-		goto error_add_memory;
-	}
-	move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-				align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, NULL);
-	mem_hotplug_done();
-
-	for (pfn = devmem->pfn_first; pfn < devmem->pfn_last; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-
-		page->pgmap = &devmem->pagemap;
-	}
-	return 0;
-
-error_add_memory:
-	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
-error_radix:
-	hmm_devmem_radix_release(devmem->resource);
-error:
-	return ret;
-}
-
 /*
  * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory
  *
@@ -1140,6 +993,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 {
 	struct hmm_devmem *devmem;
 	resource_size_t addr;
+	void *result;
 	int ret;
 
 	dev_pagemap_get_ops();
@@ -1194,14 +1048,18 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 	devmem->pfn_last = devmem->pfn_first +
 			   (resource_size(devmem->resource) >> PAGE_SHIFT);
 
-	ret = hmm_devmem_pages_create(devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
-	if (ret)
-		return ERR_PTR(ret);
+	devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
+	devmem->pagemap.res = *devmem->resource;
+	devmem->pagemap.page_fault = hmm_devmem_fault;
+	devmem->pagemap.page_free = hmm_devmem_free;
+	devmem->pagemap.altmap_valid = false;
+	devmem->pagemap.ref = &devmem->ref;
+	devmem->pagemap.data = devmem;
 
+	result = devm_memremap_pages(devmem->device, &devmem->pagemap,
+			hmm_devmem_ref_kill);
+	if (IS_ERR(result))
+		return result;
 	return devmem;
 }
 EXPORT_SYMBOL(hmm_devmem_add);
@@ -1211,6 +1069,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct resource *res)
 {
 	struct hmm_devmem *devmem;
+	void *result;
 	int ret;
 
 	if (res->desc != IORES_DESC_DEVICE_PUBLIC_MEMORY)
@@ -1243,19 +1102,18 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 	devmem->pfn_last = devmem->pfn_first +
 			   (resource_size(devmem->resource) >> PAGE_SHIFT);
 
-	ret = hmm_devmem_pages_create(devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem);
-	if (ret)
-		return ERR_PTR(ret);
-
-	ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill,
-			&devmem->ref);
-	if (ret)
-		return ERR_PTR(ret);
+	devmem->pagemap.type = MEMORY_DEVICE_PUBLIC;
+	devmem->pagemap.res = *devmem->resource;
+	devmem->pagemap.page_fault = hmm_devmem_fault;
+	devmem->pagemap.page_free = hmm_devmem_free;
+	devmem->pagemap.altmap_valid = false;
+	devmem->pagemap.ref = &devmem->ref;
+	devmem->pagemap.data = devmem;
 
+	result = devm_memremap_pages(devmem->device, &devmem->pagemap,
+			hmm_devmem_ref_kill);
+	if (IS_ERR(result))
+		return result;
 	return devmem;
 }
 EXPORT_SYMBOL(hmm_devmem_add_resource);

From patchwork Thu Sep 13 02:22:38 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10598659
Subject: [PATCH v5 7/7] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig, alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 12 Sep 2018 19:22:38 -0700
Message-ID: <153680535833.453305.11707396784697533207.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>
References:
<153680531988.453305.8080706591516037706.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

The routines hmm_devmem_add() and hmm_devmem_add_resource() duplicated
devm_memremap_pages() and are now simple wrappers around the core facility
to inject a dev_pagemap instance into the global pgmap_radix and hook
page-idle events. The devm_memremap_pages() interface is base
infrastructure for HMM. HMM has more and deeper ties into the kernel memory
management implementation than base ZONE_DEVICE, which is itself an
EXPORT_SYMBOL_GPL facility. Originally, the HMM page structure creation
routines copied the devm_memremap_pages() code and reused ZONE_DEVICE. A
cleanup to unify the implementations was discussed during the initial
review: http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html
Recent work to extend devm_memremap_pages() for the peer-to-peer-DMA
facility enabled this cleanup to move forward.

In addition to the integration with devm_memremap_pages(), HMM depends on
other GPL-only symbols:

    mmu_notifier_unregister_no_release
    percpu_ref
    region_intersects
    __class_create

It goes further to consume / indirectly expose functionality that is not
exported to any other driver:

    alloc_pages_vma
    walk_page_range

HMM is derived from devm_memremap_pages(), and extends deep core-kernel
fundamentals. Similar to devm_memremap_pages(), mark its entry points
EXPORT_SYMBOL_GPL().
Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Reviewed-by: Christoph Hellwig
Signed-off-by: Dan Williams
---
 mm/hmm.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index c6cab5205b99..1d5a09087275 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1062,7 +1062,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add);
+EXPORT_SYMBOL_GPL(hmm_devmem_add);
 
 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct device *device,
@@ -1116,7 +1116,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add_resource);
+EXPORT_SYMBOL_GPL(hmm_devmem_add_resource);
 
 /*
  * A device driver that wants to handle multiple devices memory through a