From patchwork Tue May 12 04:29:34 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 6385501
Subject: [PATCH v3 01/11] arch: introduce __pfn_t for persistent/device memory
From: Dan Williams
To: linux-kernel@vger.kernel.org
Date: Tue, 12 May 2015 00:29:34 -0400
Message-ID: <20150512042934.11521.4062.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To:
 <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-8-g92dd
Cc: axboe@kernel.dk, linux-arch@vger.kernel.org, riel@redhat.com,
 linux-nvdimm@lists.01.org, david@fromorbit.com, mingo@kernel.org,
 j.glisse@gmail.com, mgorman@suse.de, "H. Peter Anvin",
 linux-fsdevel@vger.kernel.org, Tejun Heo, akpm@linux-foundation.org,
 Linus Torvalds, hch@lst.de
List-Id: "Linux-nvdimm developer list."

Introduce a type that encapsulates a page-frame-number that is
optionally backed by memmap (struct page).  This type will be used in
place of 'struct page *' instances in contexts where device-backed
memory (usually persistent memory) is being referenced (scatterlists
for drivers, biovecs for the block layer, etc).  The operations in
those i/o paths that formerly required a 'struct page *' are to be
converted to use __pfn_t aware equivalent helpers.  Otherwise, in the
absence of persistent memory, there is no functional change and
__pfn_t is an alias for a normal memory page.

It turns out that while 'struct page' references are used broadly in
the kernel i/o stacks, the usage of 'struct page' based capabilities
is very shallow for block-i/o.  'struct page' is only used for
populating bio_vecs and scatterlists for the retrieval of dma
addresses, and for temporary kernel mappings (kmap).  Aside from kmap,
these usages can be trivially converted to operate on a pfn.
However, kmap_atomic() is more problematic as it uses mm
infrastructure, via struct page, to set up and track temporary kernel
mappings.  It would be unfortunate if the kmap infrastructure escaped
its 32-bit/HIGHMEM bonds and leaked into 64-bit code.  Thankfully, it
seems all that is needed here is to convert kmap_atomic() callers that
want to opt in to supporting persistent memory to use a new
kmap_atomic_pfn_t(), where kmap_atomic_pfn_t() re-uses the existing
ioremap() mapping established by the driver for persistent memory.

Note that, as far as conceptually understanding __pfn_t is concerned,
'persistent memory' is really any address range in host memory not
covered by memmap.  Contrast this with pure iomem that is on an
mmio-mapped bus like PCI and cannot be converted to a dma_addr_t by
"pfn << PAGE_SHIFT".

Cc: H. Peter Anvin
Cc: Jens Axboe
Cc: Tejun Heo
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Linus Torvalds
Signed-off-by: Dan Williams
---
 include/asm-generic/memory_model.h |    1 -
 include/asm-generic/pfn.h          |   84 ++++++++++++++++++++++++++++++++++++
 include/linux/mm.h                 |    1 +
 init/Kconfig                       |   13 ++++++
 4 files changed, 98 insertions(+), 1 deletion(-)
 create mode 100644 include/asm-generic/pfn.h

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index 14909b0b9cae..1b0ae21fd8ff 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -70,7 +70,6 @@
 
 #endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */
 
 #define page_to_pfn __page_to_pfn
-#define pfn_to_page __pfn_to_page
 
 #endif /* __ASSEMBLY__ */

diff --git a/include/asm-generic/pfn.h b/include/asm-generic/pfn.h
new file mode 100644
index 000000000000..ee1363e3c67c
--- /dev/null
+++ b/include/asm-generic/pfn.h
@@ -0,0 +1,84 @@
+#ifndef __ASM_PFN_H
+#define __ASM_PFN_H
+
+/*
+ * Default pfn to physical address conversion, like most arch
+ * page_to_phys() implementations this resolves to a dma_addr_t as it
+ * should be the size needed for a device to
+ * reference this address.
+ */
+#ifndef __pfn_to_phys
+#define __pfn_to_phys(pfn)	((dma_addr_t)(pfn) << PAGE_SHIFT)
+#endif
+
+static inline struct page *pfn_to_page(unsigned long pfn)
+{
+	return __pfn_to_page(pfn);
+}
+
+/*
+ * __pfn_t: encapsulates a page-frame number that is optionally backed
+ * by memmap (struct page).  This type will be used in place of a
+ * 'struct page *' instance in contexts where unmapped memory (usually
+ * persistent memory) is being referenced (scatterlists for drivers,
+ * biovecs for the block layer, etc).  Whether a __pfn_t has a struct
+ * page backing is indicated by flags in the low bits of @data.
+ */
+typedef struct {
+	union {
+		unsigned long data;
+		struct page *page;
+	};
+} __pfn_t;
+
+enum {
+#if BITS_PER_LONG == 64
+	PFN_SHIFT = 3,
+#else
+	PFN_SHIFT = 2,
+#endif
+	PFN_MASK = (1 << PFN_SHIFT) - 1,
+	/* device-pfn not covered by memmap */
+	PFN_DEV = (1 << 0),
+};
+
+#ifdef CONFIG_DEV_PFN
+static inline bool __pfn_t_has_page(__pfn_t pfn)
+{
+	return (pfn.data & PFN_MASK) == 0;
+}
+
+#else
+static inline bool __pfn_t_has_page(__pfn_t pfn)
+{
+	return true;
+}
+#endif
+
+static inline struct page *__pfn_t_to_page(__pfn_t pfn)
+{
+	if (!__pfn_t_has_page(pfn))
+		return NULL;
+	return pfn.page;
+}
+
+static inline unsigned long __pfn_t_to_pfn(__pfn_t pfn)
+{
+	if (__pfn_t_has_page(pfn))
+		return page_to_pfn(pfn.page);
+	return pfn.data >> PFN_SHIFT;
+}
+
+static inline dma_addr_t __pfn_t_to_phys(__pfn_t pfn)
+{
+	if (!__pfn_t_has_page(pfn))
+		return __pfn_to_phys(__pfn_t_to_pfn(pfn));
+	return __pfn_to_phys(page_to_pfn(pfn.page));
+}
+
+static inline __pfn_t page_to_pfn_t(struct page *page)
+{
+	__pfn_t pfn = { .page = page };
+
+	return pfn;
+}
+#endif /* __ASM_PFN_H */

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0755b9fd03a7..9d35cff41c12 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -52,6 +52,7 @@ extern int sysctl_legacy_va_layout;
 #include
 #include
 #include
+#include
 
 #ifndef __pa_symbol
 #define __pa_symbol(x)	__pa(RELOC_HIDE((unsigned long)(x), 0))

diff --git a/init/Kconfig b/init/Kconfig
index dc24dec60232..b5b8a6ed0d97 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1764,6 +1764,19 @@ config PROFILING
 	  Say Y here to enable the extended profiling support mechanisms used
 	  by profilers such as OProfile.
 
+config DEV_PFN
+	default n
+	bool "Support for device provided (pmem, graphics, etc) memory" if EXPERT
+	help
+	  Say Y here to enable I/O to/from device provided memory,
+	  i.e. reference memory that is not mapped.  This is usually
+	  the case if you have large quantities of persistent memory
+	  relative to DRAM.  Enabling this option may increase the
+	  kernel size by a few kilobytes as it instructs the kernel
+	  that a __pfn_t may reference unmapped memory.  Disabling
+	  this option instructs the kernel that a __pfn_t always
+	  references mapped platform memory.
+
 #
 # Place an empty function call at each tracepoint site.  Can be
 # dynamically changed for a probe function.