From patchwork Sun Aug 18 09:05:54 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11099465
From: Christoph Hellwig
To: Dan Williams, Jason Gunthorpe
Cc: Bharata B Rao, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, Ira Weiny
Subject: [PATCH 1/4] resource: add a not device managed
 request_free_mem_region variant
Date: Sun, 18 Aug 2019 11:05:54 +0200
Message-Id: <20190818090557.17853-2-hch@lst.de>
In-Reply-To: <20190818090557.17853-1-hch@lst.de>
References: <20190818090557.17853-1-hch@lst.de>

Factor out the guts of devm_request_free_mem_region so that we can
implement both a device managed and a manually released version as
tiny wrappers around it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ira Weiny
Reviewed-by: Dan Williams
---
 include/linux/ioport.h |  2 ++
 kernel/resource.c      | 45 +++++++++++++++++++++++++++++-------------
 2 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5b6a7121c9f0..7bddddfc76d6 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -297,6 +297,8 @@ static inline bool resource_overlaps(struct resource *r1, struct resource *r2)
 
 struct resource *devm_request_free_mem_region(struct device *dev,
 		struct resource *base, unsigned long size);
+struct resource *request_free_mem_region(struct resource *base,
+		unsigned long size, const char *name);
 
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_IOPORT_H */
diff --git a/kernel/resource.c b/kernel/resource.c
index 7ea4306503c5..74877e9d90ca 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1644,19 +1644,8 @@ void resource_list_free(struct list_head *head)
 EXPORT_SYMBOL(resource_list_free);
 
 #ifdef CONFIG_DEVICE_PRIVATE
-/**
- * devm_request_free_mem_region - find free region for device private memory
- *
- * @dev: device struct to bind the resource to
- * @size: size in bytes of the device memory to add
- * @base: resource tree to look in
- *
- * This function tries to find an empty range of physical address big enough to
- * contain the new resource, so that it can later be hotplugged as ZONE_DEVICE
- * memory, which in turn allocates struct pages.
- */
-struct resource *devm_request_free_mem_region(struct device *dev,
-		struct resource *base, unsigned long size)
+static struct resource *__request_free_mem_region(struct device *dev,
+		struct resource *base, unsigned long size, const char *name)
 {
 	resource_size_t end, addr;
 	struct resource *res;
@@ -1670,7 +1659,10 @@ struct resource *devm_request_free_mem_region(struct device *dev,
 				REGION_DISJOINT)
 			continue;
 
-		res = devm_request_mem_region(dev, addr, size, dev_name(dev));
+		if (dev)
+			res = devm_request_mem_region(dev, addr, size, name);
+		else
+			res = request_mem_region(addr, size, name);
 		if (!res)
 			return ERR_PTR(-ENOMEM);
 		res->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY;
@@ -1679,7 +1671,32 @@ struct resource *devm_request_free_mem_region(struct device *dev,
 
 	return ERR_PTR(-ERANGE);
 }
+
+/**
+ * devm_request_free_mem_region - find free region for device private memory
+ *
+ * @dev: device struct to bind the resource to
+ * @size: size in bytes of the device memory to add
+ * @base: resource tree to look in
+ *
+ * This function tries to find an empty range of physical address big enough to
+ * contain the new resource, so that it can later be hotplugged as ZONE_DEVICE
+ * memory, which in turn allocates struct pages.
+ */
+struct resource *devm_request_free_mem_region(struct device *dev,
+		struct resource *base, unsigned long size)
+{
+	return __request_free_mem_region(dev, base, size, dev_name(dev));
+}
 EXPORT_SYMBOL_GPL(devm_request_free_mem_region);
+
+struct resource *request_free_mem_region(struct resource *base,
+		unsigned long size, const char *name)
+{
+	return __request_free_mem_region(NULL, base, size, name);
+}
+EXPORT_SYMBOL_GPL(request_free_mem_region);
+
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 static int __init strict_iomem(char *str)
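As a rough illustration of the split this patch introduces (not part of the
patch itself; the probe function, pool name and sizes below are made-up
examples): the device managed wrapper ties the region's lifetime to a
struct device, while the new request_free_mem_region() leaves the release
entirely to the caller.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/sizes.h>

/* Device managed: the region is released automatically on driver unbind. */
static int example_probe(struct device *dev)
{
	struct resource *res;

	res = devm_request_free_mem_region(dev, &iomem_resource, SZ_64M);
	if (IS_ERR(res))
		return PTR_ERR(res);
	/* res now spans a free range suitable for ZONE_DEVICE hotplug */
	return 0;
}

/* No struct device available: the caller owns, and must release, the region. */
static struct resource *example_alloc_pool(void)
{
	return request_free_mem_region(&iomem_resource, SZ_64M,
				       "example device private pool");
}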
From patchwork Sun Aug 18 09:05:55 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11099469
From: Christoph Hellwig
To: Dan Williams, Jason Gunthorpe
Cc: Bharata B Rao, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, Ira Weiny
Subject: [PATCH 2/4] memremap: remove the dev field in struct dev_pagemap
Date: Sun, 18 Aug 2019 11:05:55 +0200
Message-Id: <20190818090557.17853-3-hch@lst.de>
In-Reply-To: <20190818090557.17853-1-hch@lst.de>
References: <20190818090557.17853-1-hch@lst.de>

The dev field in struct dev_pagemap is only used to print dev_name in
two places, which are at best nice to have.  Just remove the field and
thus the name in those two messages.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ira Weiny
Reviewed-by: Dan Williams
---
 include/linux/memremap.h | 1 -
 kernel/memremap.c        | 6 +-----
 mm/page_alloc.c          | 2 +-
 3 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index f8a5b2a19945..8f0013e18e14 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -109,7 +109,6 @@ struct dev_pagemap {
 	struct percpu_ref *ref;
 	struct percpu_ref internal_ref;
 	struct completion done;
-	struct device *dev;
 	enum memory_type type;
 	unsigned int flags;
 	u64 pci_p2pdma_bus_offset;
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 6ee03a816d67..600a14cbe663 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -96,7 +96,6 @@ static void dev_pagemap_cleanup(struct dev_pagemap *pgmap)
 static void devm_memremap_pages_release(void *data)
 {
 	struct dev_pagemap *pgmap = data;
-	struct device *dev = pgmap->dev;
 	struct resource *res = &pgmap->res;
 	unsigned long pfn;
 	int nid;
@@ -123,8 +122,7 @@ static void devm_memremap_pages_release(void *data)
 	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
 	pgmap_array_delete(res);
-	dev_WARN_ONCE(dev, pgmap->altmap.alloc,
-		      "%s: failed to free all reserved pages\n", __func__);
+	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
 }
 
 static void dev_pagemap_percpu_release(struct percpu_ref *ref)
@@ -245,8 +243,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		goto err_array;
 	}
 
-	pgmap->dev = dev;
-
 	error = xa_err(xa_store_range(&pgmap_array, PHYS_PFN(res->start),
 				PHYS_PFN(res->end), pgmap, GFP_KERNEL));
 	if (error)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 272c6de1bf4e..b39baa2b1faf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5982,7 +5982,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 		}
 	}
 
-	pr_info("%s initialised, %lu pages in %ums\n", dev_name(pgmap->dev),
+	pr_info("%s initialised %lu pages in %ums\n", __func__,
 		size, jiffies_to_msecs(jiffies - start));
 }

From patchwork Sun Aug 18 09:05:56 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11099473
From: Christoph Hellwig
To: Dan Williams, Jason Gunthorpe
Cc: Bharata B Rao, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, Ira Weiny
Subject: [PATCH 3/4] memremap: don't use a separate devm action for
 devmap_managed_enable_get
Date: Sun, 18 Aug 2019 11:05:56 +0200
Message-Id: <20190818090557.17853-4-hch@lst.de>
In-Reply-To: <20190818090557.17853-1-hch@lst.de>
References: <20190818090557.17853-1-hch@lst.de>

Just clean up for early failures and then piggyback on
devm_memremap_pages_release.  This helps with a pending not device
managed version of devm_memremap_pages.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ira Weiny
Reviewed-by: Dan Williams
---
 kernel/memremap.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 600a14cbe663..09a087ca30ff 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -21,13 +21,13 @@ DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
 EXPORT_SYMBOL(devmap_managed_key);
 static atomic_t devmap_managed_enable;
 
-static void devmap_managed_enable_put(void *data)
+static void devmap_managed_enable_put(void)
 {
 	if (atomic_dec_and_test(&devmap_managed_enable))
 		static_branch_disable(&devmap_managed_key);
 }
 
-static int devmap_managed_enable_get(struct device *dev, struct dev_pagemap *pgmap)
+static int devmap_managed_enable_get(struct dev_pagemap *pgmap)
 {
 	if (!pgmap->ops || !pgmap->ops->page_free) {
 		WARN(1, "Missing page_free method\n");
@@ -36,13 +36,16 @@ static int devmap_managed_enable_get(struct device *dev, struct dev_pagemap *pgm
 	if (atomic_inc_return(&devmap_managed_enable) == 1)
 		static_branch_enable(&devmap_managed_key);
 
-	return devm_add_action_or_reset(dev, devmap_managed_enable_put, NULL);
+	return 0;
 }
 #else
-static int devmap_managed_enable_get(struct device *dev, struct dev_pagemap *pgmap)
+static int devmap_managed_enable_get(struct dev_pagemap *pgmap)
 {
 	return -EINVAL;
 }
+static void devmap_managed_enable_put(void)
+{
+}
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
 static void pgmap_array_delete(struct resource *res)
@@ -123,6 +126,7 @@ static void devm_memremap_pages_release(void *data)
 	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
 	pgmap_array_delete(res);
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
+	devmap_managed_enable_put();
 }
 
 static void dev_pagemap_percpu_release(struct percpu_ref *ref)
@@ -212,7 +216,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	}
 
 	if (need_devmap_managed) {
-		error = devmap_managed_enable_get(dev, pgmap);
+		error = devmap_managed_enable_get(pgmap);
 		if (error)
 			return ERR_PTR(error);
 	}
@@ -321,6 +325,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 err_array:
	dev_pagemap_kill(pgmap);
	dev_pagemap_cleanup(pgmap);
+	devmap_managed_enable_put();
	return ERR_PTR(error);
 }
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
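The change above follows a simple pairing: the enable side takes a
reference the first time it is called, and the matching put is now made
explicitly from the release and error paths instead of being registered as
a devm action.  A generic, hedged sketch of that pattern (the names here
are illustrative, not from the kernel):

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t example_users;
static bool example_feature_enabled;	/* stand-in for a static key */

/* First caller enables the feature; later callers only bump the count. */
static void example_enable_get(void)
{
	if (atomic_inc_return(&example_users) == 1)
		example_feature_enabled = true;
}

/*
 * Last caller disables it again.  In the patch above this is called by
 * hand from the release path and from the setup error path, rather than
 * being queued as a devm action.
 */
static void example_enable_put(void)
{
	if (atomic_dec_and_test(&example_users))
		example_feature_enabled = false;
}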
From patchwork Sun Aug 18 09:05:57 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11099477
From: Christoph Hellwig
To: Dan Williams, Jason Gunthorpe
Cc: Bharata B Rao, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, Ira Weiny
Subject: [PATCH 4/4] memremap: provide a not device managed memremap_pages
Date: Sun, 18 Aug 2019 11:05:57 +0200
Message-Id: <20190818090557.17853-5-hch@lst.de>
In-Reply-To: <20190818090557.17853-1-hch@lst.de>
References: <20190818090557.17853-1-hch@lst.de>
The kvmppc ultravisor code wants a device private memory pool that is
system wide and not attached to a device.  Instead of faking one up,
provide a low-level memremap_pages for it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ira Weiny
Reviewed-by: Dan Williams
---
 include/linux/memremap.h |  2 +
 kernel/memremap.c        | 84 +++++++++++++++++++++++++----------------
 2 files changed, 54 insertions(+), 32 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 8f0013e18e14..fb2a0bd826b9 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -123,6 +123,8 @@ static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
+void *memremap_pages(struct dev_pagemap *pgmap, int nid);
+void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
 void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
 struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 09a087ca30ff..77a77704eb28 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -96,9 +96,8 @@ static void dev_pagemap_cleanup(struct dev_pagemap *pgmap)
 	}
 }
 
-static void devm_memremap_pages_release(void *data)
+void memunmap_pages(struct dev_pagemap *pgmap)
 {
-	struct dev_pagemap *pgmap = data;
 	struct resource *res = &pgmap->res;
 	unsigned long pfn;
 	int nid;
@@ -128,6 +127,12 @@ static void devm_memremap_pages_release(void *data)
 	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
 	pgmap_array_delete(res);
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
 	devmap_managed_enable_put();
 }
+EXPORT_SYMBOL_GPL(memunmap_pages);
+
+static void devm_memremap_pages_release(void *data)
+{
+	memunmap_pages(data);
+}
 
 static void dev_pagemap_percpu_release(struct percpu_ref *ref)
 {
@@ -137,27 +142,12 @@ static void dev_pagemap_percpu_release(struct percpu_ref *ref)
 	complete(&pgmap->done);
 }
 
-/**
- * devm_memremap_pages - remap and provide memmap backing for the given resource
- * @dev: hosting device for @res
- * @pgmap: pointer to a struct dev_pagemap
- *
- * Notes:
- * 1/ At a minimum the res and type members of @pgmap must be initialized
- *    by the caller before passing it to this function
- *
- * 2/ The altmap field may optionally be initialized, in which case
- *    PGMAP_ALTMAP_VALID must be set in pgmap->flags.
- *
- * 3/ The ref field may optionally be provided, in which pgmap->ref must be
- *    'live' on entry and will be killed and reaped at
- *    devm_memremap_pages_release() time, or if this routine fails.
- *
- * 4/ res is expected to be a host memory range that could feasibly be
- *    treated as a "System RAM" range, i.e. not a device mmio range, but
- *    this is not enforced.
+/*
+ * Not device managed version of devm_memremap_pages, undone by
+ * memunmap_pages().  Please use devm_memremap_pages if you have a struct
+ * device available.
 */
-void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 {
 	struct resource *res = &pgmap->res;
 	struct dev_pagemap *conflict_pgmap;
@@ -168,7 +158,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 		.altmap = pgmap_altmap(pgmap),
 	};
 	pgprot_t pgprot = PAGE_KERNEL;
-	int error, nid, is_ram;
+	int error, is_ram;
 	bool need_devmap_managed = true;
 
 	switch (pgmap->type) {
@@ -223,7 +213,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
 	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->start), NULL);
 	if (conflict_pgmap) {
-		dev_WARN(dev, "Conflicting mapping in same section\n");
+		WARN(1, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
 		error = -ENOMEM;
 		goto err_array;
@@ -231,7 +221,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
 	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->end), NULL);
 	if (conflict_pgmap) {
-		dev_WARN(dev, "Conflicting mapping in same section\n");
+		WARN(1, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
 		error = -ENOMEM;
 		goto err_array;
@@ -252,7 +242,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 	if (error)
 		goto err_array;
 
-	nid = dev_to_node(dev);
 	if (nid < 0)
 		nid = numa_mem_id();
 
@@ -308,12 +297,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 				PHYS_PFN(res->start),
 				PHYS_PFN(resource_size(res)), pgmap);
 	percpu_ref_get_many(pgmap->ref, pfn_end(pgmap) - pfn_first(pgmap));
-
-	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
-			pgmap);
-	if (error)
-		return ERR_PTR(error);
-
 	return __va(res->start);
 
  err_add_memory:
@@ -328,6 +311,43 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
  err_array:
 	dev_pagemap_kill(pgmap);
 	dev_pagemap_cleanup(pgmap);
 	devmap_managed_enable_put();
 	return ERR_PTR(error);
 }
+EXPORT_SYMBOL_GPL(memremap_pages);
+
+/**
+ * devm_memremap_pages - remap and provide memmap backing for the given resource
+ * @dev: hosting device for @res
+ * @pgmap: pointer to a struct dev_pagemap
+ *
+ * Notes:
+ * 1/ At a minimum the res and type members of @pgmap must be initialized
+ *    by the caller before passing it to this function
+ *
+ * 2/ The altmap field may optionally be initialized, in which case
+ *    PGMAP_ALTMAP_VALID must be set in pgmap->flags.
+ *
+ * 3/ The ref field may optionally be provided, in which pgmap->ref must be
+ *    'live' on entry and will be killed and reaped at
+ *    devm_memremap_pages_release() time, or if this routine fails.
+ *
+ * 4/ res is expected to be a host memory range that could feasibly be
+ *    treated as a "System RAM" range, i.e. not a device mmio range, but
+ *    this is not enforced.
+ */
+void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+{
+	int error;
+	void *ret;
+
+	ret = memremap_pages(pgmap, dev_to_node(dev));
+	if (IS_ERR(ret))
+		return ret;
+
+	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
+			pgmap);
+	if (error)
+		return ERR_PTR(error);
+	return ret;
+}
 EXPORT_SYMBOL_GPL(devm_memremap_pages);
 
 void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap)
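To illustrate how the non device managed pieces of this series might fit
together for a system-wide pool such as the kvmppc ultravisor case, here is
a hedged sketch; the pool size, names, ops and error handling are
assumptions for illustration, not code from this series.

#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/numa.h>
#include <linux/sizes.h>

static struct dev_pagemap example_pgmap;

static void example_page_free(struct page *page)
{
	/* hand the page back to the pool's own allocator (not shown) */
}

static const struct dev_pagemap_ops example_pgmap_ops = {
	.page_free = example_page_free,	/* required for MEMORY_DEVICE_PRIVATE */
};

/* Carve out a system-wide device private pool without a struct device. */
static int example_pool_init(void)
{
	struct resource *res;
	void *addr;

	res = request_free_mem_region(&iomem_resource, SZ_1G, "example pool");
	if (IS_ERR(res))
		return PTR_ERR(res);

	example_pgmap.type = MEMORY_DEVICE_PRIVATE;
	example_pgmap.res = *res;
	example_pgmap.ops = &example_pgmap_ops;

	/* No device, so pass an explicit node; < 0 falls back to numa_mem_id(). */
	addr = memremap_pages(&example_pgmap, NUMA_NO_NODE);
	if (IS_ERR(addr))
		return PTR_ERR(addr);
	return 0;
}

/*
 * Teardown is now the caller's job, paired with memremap_pages() above.
 * Releasing the underlying mem region again is also left to the caller.
 */
static void example_pool_exit(void)
{
	memunmap_pages(&example_pgmap);
}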