From patchwork Tue Sep 25 06:15:00 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613421
Subject: [PATCH v6 1/7] mm, devm_memremap_pages: Mark devm_memremap_pages() EXPORT_SYMBOL_GPL
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Michal Hocko, Jérôme Glisse, Christoph Hellwig,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:00 -0700
Message-ID: <153785610001.283091.17732419819752424489.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

devm_memremap_pages() is a facility that can create struct page entries for
any arbitrary range and give drivers the ability to subvert core aspects of
page management. Specifically the facility is tightly integrated with the
kernel's memory hotplug functionality.
It injects an altmap argument deep into the architecture specific vmemmap
implementation to allow allocating from specific reserved pages, and it has
Linux specific assumptions about page structure reference counting relative
to get_user_pages() and get_user_pages_fast(). It was an oversight and a
mistake that this was not marked EXPORT_SYMBOL_GPL from the outset.

Again, devm_memremap_pages() exposes and relies upon core kernel internal
assumptions and will continue to evolve along with 'struct page', memory
hotplug, and support for new memory types / topologies. Only an in-kernel
GPL-only driver is expected to keep up with this ongoing evolution. This
interface, and functionality derived from this interface, is not suitable
for kernel-external drivers.

Cc: Michal Hocko
Cc: "Jérôme Glisse"
Reviewed-by: Christoph Hellwig
Signed-off-by: Dan Williams
---
 kernel/memremap.c                 |    2 +-
 tools/testing/nvdimm/test/iomap.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5b8600d39931..f95c7833db6d 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -283,7 +283,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	pgmap_radix_release(res, pgoff);
 	return ERR_PTR(error);
 }
-EXPORT_SYMBOL(devm_memremap_pages);
+EXPORT_SYMBOL_GPL(devm_memremap_pages);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
 {
diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index ff9d3a5825e1..ed18a0cbc0c8 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -113,7 +113,7 @@ void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		return nfit_res->buf + offset - nfit_res->res.start;
 	return devm_memremap_pages(dev, pgmap);
 }
-EXPORT_SYMBOL(__wrap_devm_memremap_pages);
+EXPORT_SYMBOL_GPL(__wrap_devm_memremap_pages);
 
 pfn_t __wrap_phys_to_pfn_t(phys_addr_t addr, unsigned long flags)
 {
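For illustration only (this sketch is not part of the patch, and the
example_probe() name is hypothetical): the practical effect of the export
change is that only a module declaring a GPL-compatible license can resolve
the devm_memremap_pages() symbol.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/module.h>

/*
 * Hypothetical ZONE_DEVICE consumer; pgmap is assumed to already have its
 * res, ref and type members initialized, as the devm_memremap_pages()
 * kernel-doc requires.
 */
static int example_probe(struct device *dev, struct dev_pagemap *pgmap)
{
        void *addr = devm_memremap_pages(dev, pgmap);

        if (IS_ERR(addr))
                return PTR_ERR(addr);
        return 0;
}

/*
 * With EXPORT_SYMBOL_GPL, a proprietary MODULE_LICENSE() would leave the
 * devm_memremap_pages symbol unresolved at module load time.
 */
MODULE_LICENSE("GPL");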
From patchwork Tue Sep 25 06:15:05 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613423
Subject: [PATCH v6 2/7] mm, devm_memremap_pages: Kill mapping "System RAM" support
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Jérôme Glisse, Christoph Hellwig, Logan Gunthorpe,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:05 -0700
Message-ID: <153785610512.283091.14575479268042256056.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

Given the fact that devm_memremap_pages() requires a percpu_ref that is
torn down by devm_memremap_pages_release(), the current support for mapping
RAM is broken. Support for remapping "System RAM" has been broken since the
beginning and there is no existing user of this code path, so just kill the
support and make it an explicit error.

This cleanup also simplifies a follow-on patch to fix the error path when
setting a devm release action for devm_memremap_pages_release() fails.

Reviewed-by: "Jérôme Glisse"
Reviewed-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
Signed-off-by: Dan Williams
---
 kernel/memremap.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index f95c7833db6d..92e838127767 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -202,15 +202,12 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	is_ram = region_intersects(align_start, align_size,
 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
 
-	if (is_ram == REGION_MIXED) {
-		WARN_ONCE(1, "%s attempted on mixed region %pr\n",
-				__func__, res);
+	if (is_ram != REGION_DISJOINT) {
+		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
+			is_ram == REGION_MIXED ? "mixed" : "ram", res);
 		return ERR_PTR(-ENXIO);
 	}
 
-	if (is_ram == REGION_INTERSECTS)
-		return __va(res->start);
-
 	if (!pgmap->ref)
 		return ERR_PTR(-EINVAL);
 
From patchwork Tue Sep 25 06:15:10 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613433
Subject: [PATCH v6 3/7] mm, devm_memremap_pages: Fix shutdown handling
From: Dan Williams
To: akpm@linux-foundation.org
Cc: stable@vger.kernel.org, Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:10 -0700
Message-ID: <153785611023.283091.14768029396517384268.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

The last step before devm_memremap_pages() returns success is to allocate a
release action, devm_memremap_pages_release(), to tear the entire setup down.
However, the result from devm_add_action() is not checked.
Checking the error from devm_add_action() is not enough. The API currently
relies on the fact that the percpu_ref it is using is killed by the time
devm_memremap_pages_release() is run. Rather than continue this awkward
situation, offload the responsibility of killing the percpu_ref to
devm_memremap_pages_release() directly. This allows devm_memremap_pages()
to do the right thing relative to init failures and shutdown.

Without this change we could fail to register the teardown of
devm_memremap_pages(). The likelihood of hitting this failure is tiny as
small memory allocations almost always succeed. However, the impact of the
failure is large given that any future reconfiguration, or disable/enable,
of an nvdimm namespace will fail forever as subsequent calls to
devm_memremap_pages() will fail to set up the pgmap_radix since there will
be stale entries for the physical address range.

An argument could be made to require that the ->kill() operation be set in
the @pgmap arg rather than passed in separately. However, it helps code
readability, and tracking the lifetime of a given instance, to be able to
grep the kill routine directly at the devm_memremap_pages() call site.

Cc: stable@vger.kernel.org
Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...")
Reviewed-by: "Jérôme Glisse"
Reported-by: Logan Gunthorpe
Reviewed-by: Logan Gunthorpe
Reviewed-by: Christoph Hellwig
Signed-off-by: Dan Williams
---
 drivers/dax/pmem.c                |   14 +++-----------
 drivers/nvdimm/pmem.c             |   13 +++++--------
 include/linux/memremap.h          |    2 ++
 kernel/memremap.c                 |   31 +++++++++++++++----------------
 tools/testing/nvdimm/test/iomap.c |   15 ++++++++++++++-
 5 files changed, 39 insertions(+), 36 deletions(-)

diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
index 99e2aace8078..2c1f459c0c63 100644
--- a/drivers/dax/pmem.c
+++ b/drivers/dax/pmem.c
@@ -48,9 +48,8 @@ static void dax_pmem_percpu_exit(void *data)
 	percpu_ref_exit(ref);
 }
 
-static void dax_pmem_percpu_kill(void *data)
+static void dax_pmem_percpu_kill(struct percpu_ref *ref)
 {
-	struct percpu_ref *ref = data;
 	struct dax_pmem *dax_pmem = to_dax_pmem(ref);
 
 	dev_dbg(dax_pmem->dev, "trace\n");
@@ -112,17 +111,10 @@ static int dax_pmem_probe(struct device *dev)
 	}
 
 	dax_pmem->pgmap.ref = &dax_pmem->ref;
+	dax_pmem->pgmap.kill = dax_pmem_percpu_kill;
 	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
-	if (IS_ERR(addr)) {
-		devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
-		percpu_ref_exit(&dax_pmem->ref);
+	if (IS_ERR(addr))
 		return PTR_ERR(addr);
-	}
-
-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
-			&dax_pmem->ref);
-	if (rc)
-		return rc;
 
 	/* adjust the dax_region resource to the start of data */
 	memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 6071e2942053..52f72ad8fe1e 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -309,8 +309,11 @@ static void pmem_release_queue(void *q)
 	blk_cleanup_queue(q);
 }
 
-static void pmem_freeze_queue(void *q)
+static void pmem_freeze_queue(struct percpu_ref *ref)
 {
+	struct request_queue *q;
+
+	q = container_of(ref, typeof(*q), q_usage_counter);
 	blk_freeze_queue_start(q);
 }
 
@@ -402,6 +405,7 @@ static int pmem_attach_disk(struct device *dev,
 	pmem->pfn_flags = PFN_DEV;
 	pmem->pgmap.ref = &q->q_usage_counter;
+	pmem->pgmap.kill = pmem_freeze_queue;
 	if (is_nd_pfn(dev)) {
 		if (setup_pagemap_fsdax(dev, &pmem->pgmap))
 			return -ENOMEM;
@@ -425,13 +429,6 @@ static int pmem_attach_disk(struct device *dev,
 
 	addr = devm_memremap(dev, pmem->phys_addr, pmem->size,
 			ARCH_MEMREMAP_PMEM);
-	/*
-	 * At release time the queue must be frozen before
-	 * devm_memremap_pages is unwound
-	 */
-	if (devm_add_action_or_reset(dev, pmem_freeze_queue, q))
-		return -ENOMEM;
-
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index f91f9e763557..a84572cdc438 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -106,6 +106,7 @@ typedef void (*dev_page_free_t)(struct page *page, void *data);
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
  * @res: physical address range covered by @ref
  * @ref: reference count that pins the devm_memremap_pages() mapping
+ * @kill: callback to transition @ref to the dead state
  * @dev: host device of the mapping for debug
  * @data: private data pointer for page_free()
  * @type: memory type: see MEMORY_* in memory_hotplug.h
@@ -117,6 +118,7 @@ struct dev_pagemap {
 	bool altmap_valid;
 	struct resource res;
 	struct percpu_ref *ref;
+	void (*kill)(struct percpu_ref *ref);
 	struct device *dev;
 	void *data;
 	enum memory_type type;
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 92e838127767..fe2a9cd0b9c1 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -122,14 +122,10 @@ static void devm_memremap_pages_release(void *data)
 	resource_size_t align_start, align_size;
 	unsigned long pfn;
 
+	pgmap->kill(pgmap->ref);
 	for_each_device_pfn(pfn, pgmap)
 		put_page(pfn_to_page(pfn));
 
-	if (percpu_ref_tryget_live(pgmap->ref)) {
-		dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
-		percpu_ref_put(pgmap->ref);
-	}
-
 	/* pages are dead and unused, undo the arch mapping */
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
@@ -150,7 +146,7 @@ static void devm_memremap_pages_release(void *data)
 /**
  * devm_memremap_pages - remap and provide memmap backing for the given resource
  * @dev: hosting device for @res
- * @pgmap: pointer to a struct dev_pgmap
+ * @pgmap: pointer to a struct dev_pagemap
  *
  * Notes:
  * 1/ At a minimum the res, ref and type members of @pgmap must be initialized
@@ -159,11 +155,8 @@ static void devm_memremap_pages_release(void *data)
  * 2/ The altmap field may optionally be initialized, in which case altmap_valid
  *    must be set to true
  *
- * 3/ pgmap.ref must be 'live' on entry and 'dead' before devm_memunmap_pages()
- *    time (or devm release event). The expected order of events is that ref has
- *    been through percpu_ref_kill() before devm_memremap_pages_release(). The
- *    wait for the completion of all references being dropped and
- *    percpu_ref_exit() must occur after devm_memremap_pages_release().
+ * 3/ pgmap->ref must be 'live' on entry and will be killed at
+ *    devm_memremap_pages_release() time, or if this routine fails.
 *
 * 4/ res is expected to be a host memory range that could feasibly be
 *    treated as a "System RAM" range, i.e. not a device mmio range, but
@@ -180,6 +173,9 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	int error, nid, is_ram;
 	struct dev_pagemap *conflict_pgmap;
 
+	if (!pgmap->ref || !pgmap->kill)
+		return ERR_PTR(-EINVAL);
+
 	align_start = res->start & ~(SECTION_SIZE - 1);
 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
 		- align_start;
@@ -205,12 +201,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	if (is_ram != REGION_DISJOINT) {
 		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
 				is_ram == REGION_MIXED ? "mixed" : "ram", res);
-		return ERR_PTR(-ENXIO);
+		error = -ENXIO;
+		goto err_init;
 	}
 
-	if (!pgmap->ref)
-		return ERR_PTR(-EINVAL);
-
 	pgmap->dev = dev;
 
 	mutex_lock(&pgmap_lock);
@@ -267,7 +261,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		percpu_ref_get(pgmap->ref);
 	}
 
-	devm_add_action(dev, devm_memremap_pages_release, pgmap);
+	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
+			pgmap);
+	if (error)
+		return ERR_PTR(error);
 
 	return __va(res->start);
 
@@ -278,6 +275,8 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 err_pfn_remap:
 err_radix:
 	pgmap_radix_release(res, pgoff);
+err_init:
+	pgmap->kill(pgmap->ref);
 	return ERR_PTR(error);
 }
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c
index ed18a0cbc0c8..c6635fee27d8 100644
--- a/tools/testing/nvdimm/test/iomap.c
+++ b/tools/testing/nvdimm/test/iomap.c
@@ -104,13 +104,26 @@ void *__wrap_devm_memremap(struct device *dev, resource_size_t offset,
 }
 EXPORT_SYMBOL(__wrap_devm_memremap);
 
+static void nfit_test_kill(void *_pgmap)
+{
+	struct dev_pagemap *pgmap = _pgmap;
+
+	pgmap->kill(pgmap->ref);
+}
+
 void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 {
 	resource_size_t offset = pgmap->res.start;
 	struct nfit_test_resource *nfit_res = get_nfit_res(offset);
 
-	if (nfit_res)
+	if (nfit_res) {
+		int rc;
+
+		rc = devm_add_action_or_reset(dev, nfit_test_kill, pgmap);
+		if (rc)
+			return ERR_PTR(rc);
 		return nfit_res->buf + offset - nfit_res->res.start;
+	}
 	return devm_memremap_pages(dev, pgmap);
 }
 EXPORT_SYMBOL_GPL(__wrap_devm_memremap_pages);
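To make the new contract concrete, here is an illustrative caller-side sketch
(not taken from the series; struct my_dev and my_dev_kill() are hypothetical):
the driver supplies both @ref and @kill before calling devm_memremap_pages(),
and the core invokes ->kill() from devm_memremap_pages_release() or when
initialization fails.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

struct my_dev {                         /* hypothetical private structure */
        struct percpu_ref ref;
        struct dev_pagemap pgmap;
};

static void my_dev_kill(struct percpu_ref *ref)
{
        percpu_ref_kill(ref);           /* start draining page references */
}

static int my_dev_map(struct device *dev, struct my_dev *md)
{
        void *addr;

        md->pgmap.ref = &md->ref;
        md->pgmap.kill = my_dev_kill;   /* required by the new checks */
        addr = devm_memremap_pages(dev, &md->pgmap);
        return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}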
From patchwork Tue Sep 25 06:15:15 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613435
Subject: [PATCH v6 4/7] mm, devm_memremap_pages: Add MEMORY_DEVICE_PRIVATE support
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Jérôme Glisse, Christoph Hellwig, Logan Gunthorpe,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:15 -0700
Message-ID: <153785611587.283091.3545117308977274134.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

In preparation for consolidating all ZONE_DEVICE enabling via
devm_memremap_pages(), teach it how to handle the constraints of
MEMORY_DEVICE_PRIVATE ranges.

Reviewed-by: Jérôme Glisse
[jglisse: call move_pfn_range_to_zone for MEMORY_DEVICE_PRIVATE]
Acked-by: Christoph Hellwig
Reported-by: Logan Gunthorpe
Reviewed-by: Logan Gunthorpe
Signed-off-by: Dan Williams
---
 kernel/memremap.c |   53 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 41 insertions(+), 12 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index fe2a9cd0b9c1..6e32fe36b460 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -132,9 +132,15 @@ static void devm_memremap_pages_release(void *data)
 			- align_start;
 
 	mem_hotplug_begin();
-	arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
-			&pgmap->altmap : NULL);
-	kasan_remove_zero_shadow(__va(align_start), align_size);
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		pfn = align_start >> PAGE_SHIFT;
+		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
+				align_size >> PAGE_SHIFT, NULL);
+	} else {
+		arch_remove_memory(align_start, align_size,
+				pgmap->altmap_valid ? &pgmap->altmap : NULL);
+		kasan_remove_zero_shadow(__va(align_start), align_size);
+	}
 	mem_hotplug_done();
 
 	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
@@ -232,17 +238,40 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		goto err_pfn_remap;
 
 	mem_hotplug_begin();
-	error = kasan_add_zero_shadow(__va(align_start), align_size);
-	if (error) {
-		mem_hotplug_done();
-		goto err_kasan;
+
+	/*
+	 * For device private memory we call add_pages() as we only need to
+	 * allocate and initialize struct page for the device memory. More-
+	 * over the device memory is un-accessible thus we do not want to
+	 * create a linear mapping for the memory like arch_add_memory()
+	 * would do.
+	 *
+	 * For all other device memory types, which are accessible by
+	 * the CPU, we do want the linear mapping and thus use
+	 * arch_add_memory().
+	 */
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		error = add_pages(nid, align_start >> PAGE_SHIFT,
+				align_size >> PAGE_SHIFT, NULL, false);
+	} else {
+		error = kasan_add_zero_shadow(__va(align_start), align_size);
+		if (error) {
+			mem_hotplug_done();
+			goto err_kasan;
+		}
+
+		error = arch_add_memory(nid, align_start, align_size, altmap,
+				false);
+	}
+
+	if (!error) {
+		struct zone *zone;
+
+		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
+		move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT,
+				align_size >> PAGE_SHIFT, altmap);
 	}
 
-	error = arch_add_memory(nid, align_start, align_size, altmap, false);
-	if (!error)
-		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-					align_start >> PAGE_SHIFT,
-					align_size >> PAGE_SHIFT, altmap);
 	mem_hotplug_done();
 	if (error)
 		goto err_add_memory;
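A minimal configuration sketch (not from the patch; example_private_setup()
is hypothetical) of the case this enables: a device-private range only needs
struct pages, so setting pgmap->type to MEMORY_DEVICE_PRIVATE steers
devm_memremap_pages() to add_pages() and skips the linear mapping and kasan
shadow setup.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

static int example_private_setup(struct device *dev, struct dev_pagemap *pgmap,
                                 struct resource *res, struct percpu_ref *ref,
                                 void (*kill)(struct percpu_ref *))
{
        void *addr;

        pgmap->res = *res;              /* CPU-unaddressable device range */
        pgmap->ref = ref;
        pgmap->kill = kill;             /* required since patch 3/7 */
        pgmap->type = MEMORY_DEVICE_PRIVATE;    /* add_pages(), no linear map */

        addr = devm_memremap_pages(dev, pgmap);
        return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}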
From patchwork Tue Sep 25 06:15:21 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613427
Subject: [PATCH v6 5/7] mm, hmm: Use devm semantics for hmm_devmem_{add, remove}
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Logan Gunthorpe,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:21 -0700
Message-ID: <153785612119.283091.17043843780612326673.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

devm semantics arrange for resources to be torn down when device-driver
probe fails or when device-driver release completes. Similar to
devm_memremap_pages(), there is no need to support an explicit remove
operation when the users properly adhere to devm semantics.

Note that devm_kzalloc() automatically handles allocating node-local memory.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jérôme Glisse
Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Signed-off-by: Dan Williams
---
 include/linux/hmm.h |    4 --
 mm/hmm.c            |  127 ++++++++++-----------------------------------------
 2 files changed, 25 insertions(+), 106 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 4c92e3ba3e16..5ec8635f602c 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -499,8 +499,7 @@ struct hmm_devmem {
  * enough and allocate struct page for it.
  *
  * The device driver can wrap the hmm_devmem struct inside a private device
- * driver struct. The device driver must call hmm_devmem_remove() before the
- * device goes away and before freeing the hmm_devmem struct memory.
+ * driver struct.
*/ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, struct device *device, @@ -508,7 +507,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, struct device *device, struct resource *res); -void hmm_devmem_remove(struct hmm_devmem *devmem); /* * hmm_devmem_page_set_drvdata - set per-page driver data field diff --git a/mm/hmm.c b/mm/hmm.c index c968e49f7a0c..ec1d9eccf176 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -939,7 +939,6 @@ static void hmm_devmem_ref_exit(void *data) devmem = container_of(ref, struct hmm_devmem, ref); percpu_ref_exit(ref); - devm_remove_action(devmem->device, &hmm_devmem_ref_exit, data); } static void hmm_devmem_ref_kill(void *data) @@ -950,7 +949,6 @@ static void hmm_devmem_ref_kill(void *data) devmem = container_of(ref, struct hmm_devmem, ref); percpu_ref_kill(ref); wait_for_completion(&devmem->completion); - devm_remove_action(devmem->device, &hmm_devmem_ref_kill, data); } static int hmm_devmem_fault(struct vm_area_struct *vma, @@ -988,7 +986,7 @@ static void hmm_devmem_radix_release(struct resource *resource) mutex_unlock(&hmm_devmem_lock); } -static void hmm_devmem_release(struct device *dev, void *data) +static void hmm_devmem_release(void *data) { struct hmm_devmem *devmem = data; struct resource *resource = devmem->resource; @@ -996,11 +994,6 @@ static void hmm_devmem_release(struct device *dev, void *data) struct zone *zone; struct page *page; - if (percpu_ref_tryget_live(&devmem->ref)) { - dev_WARN(dev, "%s: page mapping is still live!\n", __func__); - percpu_ref_put(&devmem->ref); - } - /* pages are dead and unused, undo the arch mapping */ start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT; npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT; @@ -1124,19 +1117,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem) return ret; } -static int hmm_devmem_match(struct device *dev, void *data, void *match_data) -{ - struct hmm_devmem *devmem = data; - - return devmem->resource == match_data; -} - -static void hmm_devmem_pages_remove(struct hmm_devmem *devmem) -{ - devres_release(devmem->device, &hmm_devmem_release, - &hmm_devmem_match, devmem->resource); -} - /* * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory * @@ -1164,8 +1144,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, dev_pagemap_get_ops(); - devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem), - GFP_KERNEL, dev_to_node(device)); + devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL); if (!devmem) return ERR_PTR(-ENOMEM); @@ -1179,11 +1158,11 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release, 0, GFP_KERNEL); if (ret) - goto error_percpu_ref; + return ERR_PTR(ret); - ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref); + ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref); if (ret) - goto error_devm_add_action; + return ERR_PTR(ret); size = ALIGN(size, PA_SECTION_SIZE); addr = min((unsigned long)iomem_resource.end, @@ -1203,16 +1182,12 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, devmem->resource = devm_request_mem_region(device, addr, size, dev_name(device)); - if (!devmem->resource) { - ret = -ENOMEM; - goto error_no_resource; - } + if (!devmem->resource) + return ERR_PTR(-ENOMEM); break; } - if (!devmem->resource) { - ret = -ERANGE; - goto 
error_no_resource; - } + if (!devmem->resource) + return ERR_PTR(-ERANGE); devmem->resource->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY; devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; @@ -1221,28 +1196,13 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, ret = hmm_devmem_pages_create(devmem); if (ret) - goto error_pages; - - devres_add(device, devmem); + return ERR_PTR(ret); - ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref); - if (ret) { - hmm_devmem_remove(devmem); + ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem); + if (ret) return ERR_PTR(ret); - } return devmem; - -error_pages: - devm_release_mem_region(device, devmem->resource->start, - resource_size(devmem->resource)); -error_no_resource: -error_devm_add_action: - hmm_devmem_ref_kill(&devmem->ref); - hmm_devmem_ref_exit(&devmem->ref); -error_percpu_ref: - devres_free(devmem); - return ERR_PTR(ret); } EXPORT_SYMBOL(hmm_devmem_add); @@ -1258,8 +1218,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, dev_pagemap_get_ops(); - devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem), - GFP_KERNEL, dev_to_node(device)); + devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL); if (!devmem) return ERR_PTR(-ENOMEM); @@ -1273,12 +1232,12 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release, 0, GFP_KERNEL); if (ret) - goto error_percpu_ref; + return ERR_PTR(ret); - ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref); + ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, + &devmem->ref); if (ret) - goto error_devm_add_action; - + return ERR_PTR(ret); devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; devmem->pfn_last = devmem->pfn_first + @@ -1286,59 +1245,21 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, ret = hmm_devmem_pages_create(devmem); if (ret) - goto error_devm_add_action; + return ERR_PTR(ret); - devres_add(device, devmem); + ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem); + if (ret) + return ERR_PTR(ret); - ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref); - if (ret) { - hmm_devmem_remove(devmem); + ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill, + &devmem->ref); + if (ret) return ERR_PTR(ret); - } return devmem; - -error_devm_add_action: - hmm_devmem_ref_kill(&devmem->ref); - hmm_devmem_ref_exit(&devmem->ref); -error_percpu_ref: - devres_free(devmem); - return ERR_PTR(ret); } EXPORT_SYMBOL(hmm_devmem_add_resource); -/* - * hmm_devmem_remove() - remove device memory (kill and free ZONE_DEVICE) - * - * @devmem: hmm_devmem struct use to track and manage the ZONE_DEVICE memory - * - * This will hot-unplug memory that was hotplugged by hmm_devmem_add on behalf - * of the device driver. It will free struct page and remove the resource that - * reserved the physical address range for this device memory. 
- */
-void hmm_devmem_remove(struct hmm_devmem *devmem)
-{
-	resource_size_t start, size;
-	struct device *device;
-	bool cdm = false;
-
-	if (!devmem)
-		return;
-
-	device = devmem->device;
-	start = devmem->resource->start;
-	size = resource_size(devmem->resource);
-
-	cdm = devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY;
-	hmm_devmem_ref_kill(&devmem->ref);
-	hmm_devmem_ref_exit(&devmem->ref);
-	hmm_devmem_pages_remove(devmem);
-
-	if (!cdm)
-		devm_release_mem_region(device, start, size);
-}
-EXPORT_SYMBOL(hmm_devmem_remove);
-
 /*
  * A device driver that wants to handle multiple devices memory through a
  * single fake device can use hmm_device to do so. This is purely a helper
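An illustrative probe-side sketch of the resulting lifetime model (not part
of the patch; example_probe() is hypothetical): the driver calls
hmm_devmem_add() and no longer pairs it with an explicit remove, because the
devm actions registered on its behalf run at driver release.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/hmm.h>

static int example_probe(struct device *dev,
                         const struct hmm_devmem_ops *ops, unsigned long size)
{
        struct hmm_devmem *devmem;

        devmem = hmm_devmem_add(ops, dev, size);
        if (IS_ERR(devmem))
                return PTR_ERR(devmem);

        /*
         * No matching remove call: the devm actions registered by
         * hmm_devmem_add() tear the ZONE_DEVICE mapping down when a later
         * probe step fails or when the driver is unbound.
         */
        return 0;
}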
From patchwork Tue Sep 25 06:15:26 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613429
Subject: [PATCH v6 6/7] mm, hmm: Replace hmm_devmem_pages_create() with devm_memremap_pages()
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Christoph Hellwig, Jérôme Glisse, Balbir Singh, Logan Gunthorpe,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:26 -0700
Message-ID: <153785612652.283091.9110624682082656512.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>

Commit e8d513483300 ("memremap: change devm_memremap_pages interface to
use struct dev_pagemap") refactored devm_memremap_pages() to allow a
dev_pagemap instance to be supplied. Passing in a dev_pagemap interface
simplifies the design of pgmap-type drivers in that they can rely on
container_of() to look up any private data associated with the given
dev_pagemap instance.

In addition to the cleanups, this also gives HMM users the
multi-order-radix improvements that arrived with commit ab1b597ee0e4
("mm, devm_memremap_pages: use multi-order radix for ZONE_DEVICE
lookups").

As part of the conversion to the devm_memremap_pages() method of
handling the percpu_ref relative to when pages are put, the percpu_ref
completion needs to move to hmm_devmem_ref_exit(). See commit
71389703839e ("mm, zone_device: Replace {get, put}_zone_device_page...")
for details.
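To make the conversion easier to follow, here is a minimal sketch of the
pgmap pattern described above, assuming hypothetical my_* names and the
struct dev_pagemap layout as it looks with this series applied (later
kernels rearranged these fields). HMM additionally wires up ->page_fault
and ->page_free; they are omitted here for brevity:

/*
 * Not code from the patch: a driver embeds dev_pagemap in its private
 * structure, recovers that structure with container_of() from the
 * embedded percpu_ref (or via ->data), and hands hotplug plus teardown
 * to devm_memremap_pages(), supplying a ->kill callback that the core
 * invokes to stop new page references.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/percpu-refcount.h>

struct my_devmem {
	struct dev_pagemap pagemap;	/* embedded, not a pointer */
	struct percpu_ref ref;
};

static void my_devmem_ref_release(struct percpu_ref *ref)
{
	/* all page references are gone; nothing to do in this sketch */
}

static void my_devmem_ref_kill(struct percpu_ref *ref)
{
	/* container_of() recovers the driver-private wrapper from the ref */
	struct my_devmem *dm = container_of(ref, struct my_devmem, ref);

	percpu_ref_kill(&dm->ref);
}

static int my_devmem_setup(struct device *dev, struct my_devmem *dm,
			   struct resource *res)
{
	void *result;
	int ret;

	ret = percpu_ref_init(&dm->ref, my_devmem_ref_release, 0, GFP_KERNEL);
	if (ret)
		return ret;

	dm->pagemap.type = MEMORY_DEVICE_PRIVATE;
	dm->pagemap.res = *res;
	dm->pagemap.altmap_valid = false;
	dm->pagemap.ref = &dm->ref;
	dm->pagemap.data = dm;
	dm->pagemap.kill = my_devmem_ref_kill;	/* invoked at teardown */

	/* devm_memremap_pages() now owns hotplug and devm-based teardown */
	result = devm_memremap_pages(dev, &dm->pagemap);
	if (IS_ERR(result))
		return PTR_ERR(result);

	return 0;
}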
Reviewed-by: Christoph Hellwig Reviewed-by: Jérôme Glisse Acked-by: Balbir Singh Cc: Logan Gunthorpe Signed-off-by: Dan Williams --- mm/hmm.c | 194 ++++++++------------------------------------------------------ 1 file changed, 26 insertions(+), 168 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index ec1d9eccf176..2e72cb4188ca 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -938,17 +938,16 @@ static void hmm_devmem_ref_exit(void *data) struct hmm_devmem *devmem; devmem = container_of(ref, struct hmm_devmem, ref); + wait_for_completion(&devmem->completion); percpu_ref_exit(ref); } -static void hmm_devmem_ref_kill(void *data) +static void hmm_devmem_ref_kill(struct percpu_ref *ref) { - struct percpu_ref *ref = data; struct hmm_devmem *devmem; devmem = container_of(ref, struct hmm_devmem, ref); percpu_ref_kill(ref); - wait_for_completion(&devmem->completion); } static int hmm_devmem_fault(struct vm_area_struct *vma, @@ -971,152 +970,6 @@ static void hmm_devmem_free(struct page *page, void *data) devmem->ops->free(devmem, page); } -static DEFINE_MUTEX(hmm_devmem_lock); -static RADIX_TREE(hmm_devmem_radix, GFP_KERNEL); - -static void hmm_devmem_radix_release(struct resource *resource) -{ - resource_size_t key; - - mutex_lock(&hmm_devmem_lock); - for (key = resource->start; - key <= resource->end; - key += PA_SECTION_SIZE) - radix_tree_delete(&hmm_devmem_radix, key >> PA_SECTION_SHIFT); - mutex_unlock(&hmm_devmem_lock); -} - -static void hmm_devmem_release(void *data) -{ - struct hmm_devmem *devmem = data; - struct resource *resource = devmem->resource; - unsigned long start_pfn, npages; - struct zone *zone; - struct page *page; - - /* pages are dead and unused, undo the arch mapping */ - start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT; - npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT; - - page = pfn_to_page(start_pfn); - zone = page_zone(page); - - mem_hotplug_begin(); - if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) - __remove_pages(zone, start_pfn, npages, NULL); - else - arch_remove_memory(start_pfn << PAGE_SHIFT, - npages << PAGE_SHIFT, NULL); - mem_hotplug_done(); - - hmm_devmem_radix_release(resource); -} - -static int hmm_devmem_pages_create(struct hmm_devmem *devmem) -{ - resource_size_t key, align_start, align_size, align_end; - struct device *device = devmem->device; - int ret, nid, is_ram; - unsigned long pfn; - - align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1); - align_size = ALIGN(devmem->resource->start + - resource_size(devmem->resource), - PA_SECTION_SIZE) - align_start; - - is_ram = region_intersects(align_start, align_size, - IORESOURCE_SYSTEM_RAM, - IORES_DESC_NONE); - if (is_ram == REGION_MIXED) { - WARN_ONCE(1, "%s attempted on mixed region %pr\n", - __func__, devmem->resource); - return -ENXIO; - } - if (is_ram == REGION_INTERSECTS) - return -ENXIO; - - if (devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY) - devmem->pagemap.type = MEMORY_DEVICE_PUBLIC; - else - devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; - - devmem->pagemap.res = *devmem->resource; - devmem->pagemap.page_fault = hmm_devmem_fault; - devmem->pagemap.page_free = hmm_devmem_free; - devmem->pagemap.dev = devmem->device; - devmem->pagemap.ref = &devmem->ref; - devmem->pagemap.data = devmem; - - mutex_lock(&hmm_devmem_lock); - align_end = align_start + align_size - 1; - for (key = align_start; key <= align_end; key += PA_SECTION_SIZE) { - struct hmm_devmem *dup; - - dup = radix_tree_lookup(&hmm_devmem_radix, - key >> PA_SECTION_SHIFT); - if 
(dup) { - dev_err(device, "%s: collides with mapping for %s\n", - __func__, dev_name(dup->device)); - mutex_unlock(&hmm_devmem_lock); - ret = -EBUSY; - goto error; - } - ret = radix_tree_insert(&hmm_devmem_radix, - key >> PA_SECTION_SHIFT, - devmem); - if (ret) { - dev_err(device, "%s: failed: %d\n", __func__, ret); - mutex_unlock(&hmm_devmem_lock); - goto error_radix; - } - } - mutex_unlock(&hmm_devmem_lock); - - nid = dev_to_node(device); - if (nid < 0) - nid = numa_mem_id(); - - mem_hotplug_begin(); - /* - * For device private memory we call add_pages() as we only need to - * allocate and initialize struct page for the device memory. More- - * over the device memory is un-accessible thus we do not want to - * create a linear mapping for the memory like arch_add_memory() - * would do. - * - * For device public memory, which is accesible by the CPU, we do - * want the linear mapping and thus use arch_add_memory(). - */ - if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC) - ret = arch_add_memory(nid, align_start, align_size, NULL, - false); - else - ret = add_pages(nid, align_start >> PAGE_SHIFT, - align_size >> PAGE_SHIFT, NULL, false); - if (ret) { - mem_hotplug_done(); - goto error_add_memory; - } - move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE], - align_start >> PAGE_SHIFT, - align_size >> PAGE_SHIFT, NULL); - mem_hotplug_done(); - - for (pfn = devmem->pfn_first; pfn < devmem->pfn_last; pfn++) { - struct page *page = pfn_to_page(pfn); - - page->pgmap = &devmem->pagemap; - } - return 0; - -error_add_memory: - untrack_pfn(NULL, PHYS_PFN(align_start), align_size); -error_radix: - hmm_devmem_radix_release(devmem->resource); -error: - return ret; -} - /* * hmm_devmem_add() - hotplug ZONE_DEVICE memory for device memory * @@ -1140,6 +993,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, { struct hmm_devmem *devmem; resource_size_t addr; + void *result; int ret; dev_pagemap_get_ops(); @@ -1194,14 +1048,18 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, devmem->pfn_last = devmem->pfn_first + (resource_size(devmem->resource) >> PAGE_SHIFT); - ret = hmm_devmem_pages_create(devmem); - if (ret) - return ERR_PTR(ret); - - ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem); - if (ret) - return ERR_PTR(ret); + devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; + devmem->pagemap.res = *devmem->resource; + devmem->pagemap.page_fault = hmm_devmem_fault; + devmem->pagemap.page_free = hmm_devmem_free; + devmem->pagemap.altmap_valid = false; + devmem->pagemap.ref = &devmem->ref; + devmem->pagemap.data = devmem; + devmem->pagemap.kill = hmm_devmem_ref_kill; + result = devm_memremap_pages(devmem->device, &devmem->pagemap); + if (IS_ERR(result)) + return result; return devmem; } EXPORT_SYMBOL(hmm_devmem_add); @@ -1211,6 +1069,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, struct resource *res) { struct hmm_devmem *devmem; + void *result; int ret; if (res->desc != IORES_DESC_DEVICE_PUBLIC_MEMORY) @@ -1243,19 +1102,18 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, devmem->pfn_last = devmem->pfn_first + (resource_size(devmem->resource) >> PAGE_SHIFT); - ret = hmm_devmem_pages_create(devmem); - if (ret) - return ERR_PTR(ret); - - ret = devm_add_action_or_reset(device, hmm_devmem_release, devmem); - if (ret) - return ERR_PTR(ret); - - ret = devm_add_action_or_reset(device, hmm_devmem_ref_kill, - &devmem->ref); - if (ret) - return ERR_PTR(ret); + devmem->pagemap.type = 
MEMORY_DEVICE_PUBLIC;
+	devmem->pagemap.res = *devmem->resource;
+	devmem->pagemap.page_fault = hmm_devmem_fault;
+	devmem->pagemap.page_free = hmm_devmem_free;
+	devmem->pagemap.altmap_valid = false;
+	devmem->pagemap.ref = &devmem->ref;
+	devmem->pagemap.data = devmem;
+	devmem->pagemap.kill = hmm_devmem_ref_kill;
+	result = devm_memremap_pages(devmem->device, &devmem->pagemap);
+	if (IS_ERR(result))
+		return result;
 	return devmem;
 }
 EXPORT_SYMBOL(hmm_devmem_add_resource);

From patchwork Tue Sep 25 06:15:31 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10613431
Subject: [PATCH v6 7/7] mm, hmm: Mark hmm_devmem_{add, add_resource} EXPORT_SYMBOL_GPL
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Jérôme Glisse, Logan Gunthorpe, Christoph Hellwig,
 alexander.h.duyck@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 24 Sep 2018 23:15:31 -0700
Message-ID: <153785613162.283091.15536211596081003220.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153785609460.283091.17422092801700439095.stgit@dwillia2-desk3.amr.corp.intel.com>
The routines hmm_devmem_add() and hmm_devmem_add_resource() duplicated
devm_memremap_pages() and are now simple wrappers around the core
facility to inject a dev_pagemap instance into the global pgmap_radix
and hook page-idle events. The devm_memremap_pages() interface is base
infrastructure for HMM. HMM has more and deeper ties into the kernel
memory management implementation than base ZONE_DEVICE, which is itself
an EXPORT_SYMBOL_GPL facility.

Originally, the HMM page structure creation routines copied the
devm_memremap_pages() code and reused ZONE_DEVICE. A cleanup to unify
the implementations was discussed during the initial review:

    http://lkml.iu.edu/hypermail/linux/kernel/1701.2/00812.html

Recent work to extend devm_memremap_pages() for the peer-to-peer-DMA
facility enabled this cleanup to move forward.

In addition to the integration with devm_memremap_pages(), HMM depends
on other GPL-only symbols:

    mmu_notifier_unregister_no_release
    percpu_ref
    region_intersects
    __class_create

It goes further to consume / indirectly expose functionality that is
not exported to any other driver:

    alloc_pages_vma
    walk_page_range

HMM is derived from devm_memremap_pages(), and extends deep core-kernel
fundamentals. Similar to devm_memremap_pages(), mark its entry points
EXPORT_SYMBOL_GPL().

Cc: "Jérôme Glisse"
Cc: Logan Gunthorpe
Reviewed-by: Christoph Hellwig
Signed-off-by: Dan Williams
---
 mm/hmm.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 2e72cb4188ca..90d1383c7e24 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1062,7 +1062,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add);
+EXPORT_SYMBOL_GPL(hmm_devmem_add);

 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 					   struct device *device,
@@ -1116,7 +1116,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
 		return result;
 	return devmem;
 }
-EXPORT_SYMBOL(hmm_devmem_add_resource);
+EXPORT_SYMBOL_GPL(hmm_devmem_add_resource);

 /*
  * A device driver that wants to handle multiple devices memory through a