From patchwork Tue Aug 11 09:44:46 2020
X-Patchwork-Submitter: Roger Pau Monne
X-Patchwork-Id: 11708927
From: Roger Pau Monne <roger.pau@citrix.com>
Cc: Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, Jason Gunthorpe,
    Ira Weiny, "Aneesh Kumar K.V", Johannes Thumshirn, Logan Gunthorpe
Subject: [PATCH v4 1/2] memremap: rename MEMORY_DEVICE_DEVDAX to MEMORY_DEVICE_GENERIC
Date: Tue, 11 Aug 2020 11:44:46 +0200
Message-ID: <20200811094447.31208-2-roger.pau@citrix.com>
In-Reply-To: <20200811094447.31208-1-roger.pau@citrix.com>
References: <20200811094447.31208-1-roger.pau@citrix.com>

This is in preparation for the logic behind MEMORY_DEVICE_DEVDAX also
being used by non-DAX devices. No functional change intended.

Signed-off-by: Roger Pau Monné
Acked-by: Andrew Morton
Reviewed-by: Ira Weiny
---
Cc: Dan Williams
Cc: Vishal Verma
Cc: Dave Jiang
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Ira Weiny
Cc: "Aneesh Kumar K.V"
Cc: Johannes Thumshirn
Cc: Logan Gunthorpe
Cc: linux-nvdimm@lists.01.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-mm@kvack.org
---
 drivers/dax/device.c     | 2 +-
 include/linux/memremap.h | 9 ++++-----
 mm/memremap.c            | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 4c0af2eb7e19..1e89513f3c59 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -429,7 +429,7 @@ int dev_dax_probe(struct device *dev)
 		return -EBUSY;
 	}
 
-	dev_dax->pgmap.type = MEMORY_DEVICE_DEVDAX;
+	dev_dax->pgmap.type = MEMORY_DEVICE_GENERIC;
 	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 5f5b2df06e61..e5862746751b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -46,11 +46,10 @@ struct vmem_altmap {
  * wakeup is used to coordinate physical address space management (ex:
  * fs truncate/hole punch) vs pinned pages (ex: device dma).
  *
- * MEMORY_DEVICE_DEVDAX:
+ * MEMORY_DEVICE_GENERIC:
  * Host memory that has similar access semantics as System RAM i.e. DMA
- * coherent and supports page pinning. In contrast to
- * MEMORY_DEVICE_FS_DAX, this memory is access via a device-dax
- * character device.
+ * coherent and supports page pinning. This is for example used by DAX devices
+ * that expose memory using a character device.
  *
  * MEMORY_DEVICE_PCI_P2PDMA:
  * Device memory residing in a PCI BAR intended for use with Peer-to-Peer
@@ -60,7 +59,7 @@ enum memory_type {
 	/* 0 is reserved to catch uninitialized type fields */
 	MEMORY_DEVICE_PRIVATE = 1,
 	MEMORY_DEVICE_FS_DAX,
-	MEMORY_DEVICE_DEVDAX,
+	MEMORY_DEVICE_GENERIC,
 	MEMORY_DEVICE_PCI_P2PDMA,
 };
 
diff --git a/mm/memremap.c b/mm/memremap.c
index 03e38b7a38f1..006dace60b1a 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -216,7 +216,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 			return ERR_PTR(-EINVAL);
 		}
 		break;
-	case MEMORY_DEVICE_DEVDAX:
+	case MEMORY_DEVICE_GENERIC:
 		need_devmap_managed = false;
 		break;
 	case MEMORY_DEVICE_PCI_P2PDMA:
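For context, a minimal sketch of what a MEMORY_DEVICE_GENERIC user looks like
after this rename, modelled on the dev_dax_probe() hunk above. The function
name and the caller-provided device/pgmap are placeholders, not part of the
patch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>

/* Hypothetical driver snippet: give a device-backed range struct pages. */
static void *map_generic_range(struct device *dev, struct dev_pagemap *pgmap)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_GENERIC;	/* was MEMORY_DEVICE_DEVDAX */

	/* Builds struct pages and a kernel mapping for the pgmap resource. */
	addr = devm_memremap_pages(dev, pgmap);

	/* ERR_PTR on failure, as in dev_dax_probe(). */
	return addr;
}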
From patchwork Tue Aug 11 09:44:47 2020
X-Patchwork-Submitter: Roger Pau Monne
X-Patchwork-Id: 11708929
From: Roger Pau Monne <roger.pau@citrix.com>
Cc: Roger Pau Monne, Oleksandr Andrushchenko, David Airlie, Daniel Vetter,
    Boris Ostrovsky, Juergen Gross, Stefano Stabellini, Dan Carpenter,
    Wei Liu, Yan Yankovskyi, David Hildenbrand, Michal Hocko, Dan Williams
Subject: [PATCH v4 2/2] xen: add helpers to allocate unpopulated memory
Date: Tue, 11 Aug 2020 11:44:47 +0200
Message-ID: <20200811094447.31208-3-roger.pau@citrix.com>
In-Reply-To: <20200811094447.31208-1-roger.pau@citrix.com>
References: <20200811094447.31208-1-roger.pau@citrix.com>

To be used to create foreign mappings. This is based on the ZONE_DEVICE
facility, which persistent memory devices use to create struct pages and
kernel virtual mappings for the IOMEM areas of such devices.

Note that on kernels without support for ZONE_DEVICE Xen will fall back to
using ballooned pages to create foreign mappings.

The newly added helpers use the same parameters as the existing
{alloc/free}_xenballooned_pages functions, which allows for in-place
replacement of the callers. Once a memory region has been added to be used
as scratch mapping space it is never released, and pages returned to the
allocator are kept in a linked list. This keeps a buffer of pages available
and avoids frequent additions and removals of regions.

If enabled (because ZONE_DEVICE is supported), the new functionality
untangles the Xen balloon and RAM hotplug from the use of unpopulated
physical memory ranges to map foreign pages, which is the correct thing to
do in order to avoid foreign-page mappings depending on memory hotplug.

Note the driver is currently not enabled on Arm because it would interfere
with the identity mapping required on some of those platforms.
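To make the in-place replacement concrete, a minimal sketch of a converted
caller is shown below; the helper name and usage are made up for
illustration, while the prototypes match the declarations this patch adds to
include/xen/xen.h:

#include <linux/mm.h>
#include <xen/xen.h>

/* Hypothetical helper: grab scratch pages to back foreign mappings. */
static int get_scratch_pages(struct page **pages, unsigned int nr)
{
	int ret;

	/*
	 * Same parameters as alloc_xenballooned_pages(); without
	 * CONFIG_XEN_UNPOPULATED_ALLOC this compiles to the balloon
	 * helper via the #define fallback in xen.h.
	 */
	ret = xen_alloc_unpopulated_pages(nr, pages);
	if (ret)
		return ret;

	/* ... map foreign grants into these pages ... */

	xen_free_unpopulated_pages(nr, pages);
	return 0;
}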
Signed-off-by: Roger Pau Monné
Reviewed-by: Juergen Gross
---
Cc: Oleksandr Andrushchenko
Cc: David Airlie
Cc: Daniel Vetter
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: Dan Carpenter
Cc: Roger Pau Monne
Cc: Wei Liu
Cc: Yan Yankovskyi
Cc: dri-devel@lists.freedesktop.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-mm@kvack.org
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Dan Williams
---
Changes since v3:
 - Introduce a Kconfig option that gates the addition of the unpopulated
   alloc driver. This allows easily disabling it on Arm platforms.
 - Dropped Juergen RB due to the addition of the Kconfig option.
 - Switched from MEMORY_DEVICE_DEVDAX to MEMORY_DEVICE_GENERIC.

Changes since v2:
 - Drop BUILD_BUG_ON regarding PVMMU page sizes.
 - Use a SPDX license identifier.
 - Call fill with only the minimum required number of pages.
 - Include xen.h header in xen_drm_front_gem.c.
 - Use less generic function names.
 - Exit early from the init function if not a PV guest.
 - Don't use all caps for region name.
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c |   9 +-
 drivers/xen/Kconfig                     |   4 +
 drivers/xen/Makefile                    |   1 +
 drivers/xen/balloon.c                   |   4 +-
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 185 ++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   9 ++
 10 files changed, 215 insertions(+), 15 deletions(-)
 create mode 100644 drivers/xen/unpopulated-alloc.c

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..270e1bd3e4da 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -18,6 +18,7 @@
 #include
 #include
+#include
 
 #include "xen_drm_front.h"
 #include "xen_drm_front_gem.h"
@@ -99,8 +100,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		 * allocate ballooned pages which will be used to map
 		 * grant references provided by the backend
 		 */
-		ret = alloc_xenballooned_pages(xen_obj->num_pages,
-					       xen_obj->pages);
+		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
+						  xen_obj->pages);
 		if (ret < 0) {
 			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
 				  xen_obj->num_pages, ret);
@@ -152,8 +153,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 	} else {
 		if (xen_obj->pages) {
 			if (xen_obj->be_alloc) {
-				free_xenballooned_pages(xen_obj->num_pages,
-							xen_obj->pages);
+				xen_free_unpopulated_pages(xen_obj->num_pages,
+							   xen_obj->pages);
 				gem_free_pages_array(xen_obj);
 			} else {
 				drm_gem_put_pages(&xen_obj->base,
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 1d339ef92422..018020b91baa 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -327,4 +327,8 @@ config XEN_HAVE_VPMU
 config XEN_FRONT_PGDIR_SHBUF
 	tristate
 
+config XEN_UNPOPULATED_ALLOC
+	bool
+	default y if ZONE_DEVICE && !ARM && !ARM64
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index c25c9a699b48..babdca808861 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -41,3 +41,4 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
 xen-gntalloc-y				:= gntalloc.o
 xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
+obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)	+= unpopulated-alloc.o
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 37ffccda8bb8..51427c752b37 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -653,7 +653,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL(free_xenballooned_pages);
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_XEN_UNPOPULATED_ALLOC)
 static void __init balloon_add_region(unsigned long start_pfn,
 				      unsigned long pages)
 {
@@ -707,7 +707,7 @@ static int __init balloon_init(void)
 	register_sysctl_table(xen_root);
 #endif
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_XEN_UNPOPULATED_ALLOC)
 	{
 		int i;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 8d06bf1cc347..523dcdf39cc9 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 {
 	int ret;
 
-	ret = alloc_xenballooned_pages(nr_pages, pages);
+	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (ret < 0)
 		return ret;
 
@@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
 void gnttab_free_pages(int nr_pages, struct page **pages)
 {
 	gnttab_pages_clear_private(nr_pages, pages);
-	free_xenballooned_pages(nr_pages, pages);
+	xen_free_unpopulated_pages(nr_pages, pages);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_pages);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 63abe6c3642b..b0c73c58f987 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -424,7 +424,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
 	if (pages == NULL)
 		return -ENOMEM;
 
-	rc = alloc_xenballooned_pages(numpgs, pages);
+	rc = xen_alloc_unpopulated_pages(numpgs, pages);
 	if (rc != 0) {
 		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
 			numpgs, rc);
@@ -895,7 +895,7 @@ static void privcmd_close(struct vm_area_struct *vma)
 
 	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
 	if (rc == 0)
-		free_xenballooned_pages(numpgs, pages);
+		xen_free_unpopulated_pages(numpgs, pages);
 	else
 		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
 			numpgs, rc);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
new file mode 100644
index 000000000000..1b5d157c6977
--- /dev/null
+++ b/drivers/xen/unpopulated-alloc.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+
+static DEFINE_MUTEX(list_lock);
+static LIST_HEAD(page_list);
+static unsigned int list_count;
+
+static int fill_list(unsigned int nr_pages)
+{
+	struct dev_pagemap *pgmap;
+	void *vaddr;
+	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
+	int nid, ret;
+
+	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+	if (!pgmap)
+		return -ENOMEM;
+
+	pgmap->type = MEMORY_DEVICE_GENERIC;
+	pgmap->res.name = "Xen scratch";
+	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+	ret = allocate_resource(&iomem_resource, &pgmap->res,
+				alloc_pages * PAGE_SIZE, 0, -1,
+				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
+	if (ret < 0) {
+		pr_err("Cannot allocate new IOMEM resource\n");
+		kfree(pgmap);
+		return ret;
+	}
+
+	nid = memory_add_physaddr_to_nid(pgmap->res.start);
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	/*
+	 * memremap will build page tables for the new memory so
+	 * the p2m must contain invalid entries so the correct
+	 * non-present PTEs will be written.
+	 *
+	 * If a failure occurs, the original (identity) p2m entries
+	 * are not restored since this region is now known not to
+	 * conflict with any devices.
+	 */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
+
+		for (i = 0; i < alloc_pages; i++) {
+			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
+				pr_warn("set_phys_to_machine() failed, no memory added\n");
+				release_resource(&pgmap->res);
+				kfree(pgmap);
+				return -ENOMEM;
+			}
+		}
+	}
+#endif
+
+	vaddr = memremap_pages(pgmap, nid);
+	if (IS_ERR(vaddr)) {
+		pr_err("Cannot remap memory range\n");
+		release_resource(&pgmap->res);
+		kfree(pgmap);
+		return PTR_ERR(vaddr);
+	}
+
+	for (i = 0; i < alloc_pages; i++) {
+		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
+
+		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
+		list_add(&pg->lru, &page_list);
+		list_count++;
+	}
+
+	return 0;
+}
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+	int ret = 0;
+
+	mutex_lock(&list_lock);
+	if (list_count < nr_pages) {
+		ret = fill_list(nr_pages - list_count);
+		if (ret)
+			goto out;
+	}
+
+	for (i = 0; i < nr_pages; i++) {
+		struct page *pg = list_first_entry_or_null(&page_list,
+							   struct page,
+							   lru);
+
+		BUG_ON(!pg);
+		list_del(&pg->lru);
+		list_count--;
+		pages[i] = pg;
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+			if (ret < 0) {
+				unsigned int j;
+
+				for (j = 0; j <= i; j++) {
+					list_add(&pages[j]->lru, &page_list);
+					list_count++;
+				}
+				goto out;
+			}
+		}
+#endif
+	}
+
+out:
+	mutex_unlock(&list_lock);
+	return ret;
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
+
+/**
+ * xen_free_unpopulated_pages - return unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+
+	mutex_lock(&list_lock);
+	for (i = 0; i < nr_pages; i++) {
+		list_add(&pages[i]->lru, &page_list);
+		list_count++;
+	}
+	mutex_unlock(&list_lock);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_pages);
+
+#ifdef CONFIG_XEN_PV
+static int __init init(void)
+{
+	unsigned int i;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	if (!xen_pv_domain())
+		return 0;
+
+	/*
+	 * Initialize with pages from the extra memory regions (see
+	 * arch/x86/xen/setup.c).
+	 */
+	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+		unsigned int j;
+
+		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
+			struct page *pg =
+				pfn_to_page(xen_extra_mem[i].start_pfn + j);
+
+			list_add(&pg->lru, &page_list);
+			list_count++;
+		}
+	}
+
+	return 0;
+}
+subsys_initcall(init);
+#endif
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 786fbb7d8be0..70b6c4780fbd 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -615,7 +615,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 	bool leaked = false;
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
+	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
@@ -656,7 +656,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 			 addr, nr_pages);
 out_free_ballooned_pages:
 	if (!leaked)
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
 out_err:
 	return err;
 }
@@ -852,7 +852,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
 			       info.addrs);
 	if (!rv) {
 		vunmap(vaddr);
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
 	} else
 		WARN(1, "Leaking %p, size %u page(s)\n",
 		     vaddr, nr_pages);
diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index 7b1077f0abcb..34742c6e189e 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 		kfree(pages);
 		return -ENOMEM;
 	}
-	rc = alloc_xenballooned_pages(nr_pages, pages);
+	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (rc) {
 		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
@@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 	if (!vaddr) {
 		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
-		free_xenballooned_pages(nr_pages, pages);
+		xen_free_unpopulated_pages(nr_pages, pages);
 		kfree(pages);
 		kfree(pfns);
 		return -ENOMEM;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 19a72f591e2b..43efba045acc 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,4 +52,13 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
+#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+#else
+#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
+#define xen_free_unpopulated_pages free_xenballooned_pages
+#include
+#endif
+
 #endif /* _XEN_XEN_H */
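As a closing usage illustration (not part of the series): after the
grant-table.c change above, a backend-style mapping of a single foreign grant
gets its scratch page from the unpopulated pool via gnttab_alloc_pages(). The
grant reference and domid below are placeholders, and the grant-table calls
are the pre-existing include/xen/grant_table.h interfaces, so treat this as a
sketch rather than a reference:

#include <linux/errno.h>
#include <linux/mm.h>
#include <xen/grant_table.h>

/* Hypothetical: map one grant @ref from domain @domid, then tear it down. */
static int map_one_grant(grant_ref_t ref, domid_t domid)
{
	struct gnttab_map_grant_ref map;
	struct gnttab_unmap_grant_ref unmap;
	struct page *page;
	unsigned long addr;
	int ret;

	/* Backed by xen_alloc_unpopulated_pages() after this series. */
	ret = gnttab_alloc_pages(1, &page);
	if (ret)
		return ret;

	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
	gnttab_set_map_op(&map, addr, GNTMAP_host_map, ref, domid);
	ret = gnttab_map_refs(&map, NULL, &page, 1);
	if (ret)
		goto out;
	if (map.status != GNTST_okay) {
		ret = -EINVAL;
		goto out;
	}

	/* ... use the mapped page ... */

	gnttab_set_unmap_op(&unmap, addr, GNTMAP_host_map, map.handle);
	gnttab_unmap_refs(&unmap, NULL, &page, 1);
out:
	gnttab_free_pages(1, &page);
	return ret;
}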